00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 836 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3496 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.271 > git --version # 'git version 2.39.2' 00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.306 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.306 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.161 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.171 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.183 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:05.183 > git config core.sparsecheckout # timeout=10 00:00:05.194 > git read-tree -mu HEAD # timeout=10 00:00:05.209 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:05.226 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:05.226 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:05.305 [Pipeline] Start of Pipeline 00:00:05.317 [Pipeline] library 00:00:05.318 Loading library shm_lib@master 00:00:05.318 Library shm_lib@master is cached. Copying from home. 00:00:05.335 [Pipeline] node 00:00:05.345 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.346 [Pipeline] { 00:00:05.354 [Pipeline] catchError 00:00:05.355 [Pipeline] { 00:00:05.367 [Pipeline] wrap 00:00:05.374 [Pipeline] { 00:00:05.381 [Pipeline] stage 00:00:05.383 [Pipeline] { (Prologue) 00:00:05.579 [Pipeline] sh 00:00:05.863 + logger -p user.info -t JENKINS-CI 00:00:05.950 [Pipeline] echo 00:00:05.952 Node: GP11 00:00:05.962 [Pipeline] sh 00:00:06.266 [Pipeline] setCustomBuildProperty 00:00:06.273 [Pipeline] echo 00:00:06.274 Cleanup processes 00:00:06.277 [Pipeline] sh 00:00:06.557 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.557 665893 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.569 [Pipeline] sh 00:00:06.850 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.850 ++ grep -v 'sudo pgrep' 00:00:06.850 ++ awk '{print $1}' 00:00:06.850 + sudo kill -9 00:00:06.850 + true 00:00:06.865 [Pipeline] cleanWs 00:00:06.875 [WS-CLEANUP] Deleting project workspace... 00:00:06.875 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.882 [WS-CLEANUP] done 00:00:06.886 [Pipeline] setCustomBuildProperty 00:00:06.901 [Pipeline] sh 00:00:07.186 + sudo git config --global --replace-all safe.directory '*' 00:00:07.277 [Pipeline] httpRequest 00:00:08.428 [Pipeline] echo 00:00:08.429 Sorcerer 10.211.164.101 is alive 00:00:08.438 [Pipeline] retry 00:00:08.440 [Pipeline] { 00:00:08.451 [Pipeline] httpRequest 00:00:08.455 HttpMethod: GET 00:00:08.456 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.456 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.478 Response Code: HTTP/1.1 200 OK 00:00:08.479 Success: Status code 200 is in the accepted range: 200,404 00:00:08.479 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:15.259 [Pipeline] } 00:00:15.279 [Pipeline] // retry 00:00:15.289 [Pipeline] sh 00:00:15.577 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:15.594 [Pipeline] httpRequest 00:00:15.984 [Pipeline] echo 00:00:15.987 Sorcerer 10.211.164.101 is alive 00:00:15.998 [Pipeline] retry 00:00:16.000 [Pipeline] { 00:00:16.015 [Pipeline] httpRequest 00:00:16.020 HttpMethod: GET 00:00:16.020 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:16.021 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:16.041 Response Code: HTTP/1.1 200 OK 00:00:16.042 Success: Status code 200 is in the accepted range: 200,404 00:00:16.042 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:39.453 [Pipeline] } 00:01:39.468 [Pipeline] // retry 00:01:39.474 [Pipeline] sh 00:01:39.757 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:42.303 [Pipeline] sh 00:01:42.588 + git -C spdk log --oneline -n5 00:01:42.588 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:42.588 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:42.588 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:42.588 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:42.588 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:42.608 [Pipeline] withCredentials 00:01:42.620 > git --version # timeout=10 00:01:42.633 > git --version # 'git version 2.39.2' 00:01:42.652 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:42.655 [Pipeline] { 00:01:42.664 [Pipeline] retry 00:01:42.666 [Pipeline] { 00:01:42.681 [Pipeline] sh 00:01:42.967 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:42.980 [Pipeline] } 00:01:42.998 [Pipeline] // retry 00:01:43.005 [Pipeline] } 00:01:43.024 [Pipeline] // withCredentials 00:01:43.036 [Pipeline] httpRequest 00:01:43.452 [Pipeline] echo 00:01:43.454 Sorcerer 10.211.164.101 is alive 00:01:43.465 [Pipeline] retry 00:01:43.468 [Pipeline] { 00:01:43.484 [Pipeline] httpRequest 00:01:43.489 HttpMethod: GET 00:01:43.490 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:43.490 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:43.497 Response Code: HTTP/1.1 200 OK 00:01:43.498 Success: Status code 200 is in the accepted range: 200,404 00:01:43.498 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:50.906 [Pipeline] }
00:01:50.926 [Pipeline] // retry
00:01:50.935 [Pipeline] sh
00:01:51.221 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:53.140 [Pipeline] sh
00:01:53.424 + git -C dpdk log --oneline -n5
00:01:53.424 eeb0605f11 version: 23.11.0
00:01:53.424 238778122a doc: update release notes for 23.11
00:01:53.424 46aa6b3cfc doc: fix description of RSS features
00:01:53.424 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:53.424 7e421ae345 devtools: support skipping forbid rule check
00:01:53.434 [Pipeline] }
00:01:53.451 [Pipeline] // stage
00:01:53.463 [Pipeline] stage
00:01:53.465 [Pipeline] { (Prepare)
00:01:53.490 [Pipeline] writeFile
00:01:53.510 [Pipeline] sh
00:01:53.797 + logger -p user.info -t JENKINS-CI
00:01:53.811 [Pipeline] sh
00:01:54.098 + logger -p user.info -t JENKINS-CI
00:01:54.131 [Pipeline] sh
00:01:54.416 + cat autorun-spdk.conf
00:01:54.416 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:54.416 SPDK_TEST_NVMF=1
00:01:54.416 SPDK_TEST_NVME_CLI=1
00:01:54.416 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:54.416 SPDK_TEST_NVMF_NICS=e810
00:01:54.416 SPDK_TEST_VFIOUSER=1
00:01:54.416 SPDK_RUN_UBSAN=1
00:01:54.416 NET_TYPE=phy
00:01:54.416 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:54.416 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:54.424 RUN_NIGHTLY=1
00:01:54.428 [Pipeline] readFile
00:01:54.453 [Pipeline] withEnv
00:01:54.454 [Pipeline] {
00:01:54.466 [Pipeline] sh
00:01:54.753 + set -ex
00:01:54.754 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:54.754 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:54.754 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:54.754 ++ SPDK_TEST_NVMF=1
00:01:54.754 ++ SPDK_TEST_NVME_CLI=1
00:01:54.754 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:54.754 ++ SPDK_TEST_NVMF_NICS=e810
00:01:54.754 ++ SPDK_TEST_VFIOUSER=1
00:01:54.754 ++ SPDK_RUN_UBSAN=1
00:01:54.754 ++ NET_TYPE=phy
00:01:54.754 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:54.754 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:54.754 ++ RUN_NIGHTLY=1
00:01:54.754 + case $SPDK_TEST_NVMF_NICS in
00:01:54.754 + DRIVERS=ice
00:01:54.754 + [[ tcp == \r\d\m\a ]]
00:01:54.754 + [[ -n ice ]]
00:01:54.754 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:54.754 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:54.754 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:54.754 rmmod: ERROR: Module irdma is not currently loaded
00:01:54.754 rmmod: ERROR: Module i40iw is not currently loaded
00:01:54.754 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:54.754 + true
00:01:54.754 + for D in $DRIVERS
00:01:54.754 + sudo modprobe ice
00:01:54.763 + exit 0
00:01:54.763 [Pipeline] }
00:01:54.781 [Pipeline] // withEnv
00:01:54.786 [Pipeline] }
00:01:54.802 [Pipeline] // stage
00:01:54.812 [Pipeline] catchError
00:01:54.814 [Pipeline] {
00:01:54.827 [Pipeline] timeout
00:01:54.827 Timeout set to expire in 1 hr 0 min
00:01:54.829 [Pipeline] {
00:01:54.843 [Pipeline] stage
00:01:54.845 [Pipeline] { (Tests)
00:01:54.858 [Pipeline] sh
00:01:55.143 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:55.144 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:55.144 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:55.144 + [[ -n
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:55.144 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:55.144 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:55.144 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:55.144 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:55.144 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:55.144 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:55.144 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:55.144 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:55.144 + source /etc/os-release
00:01:55.144 ++ NAME='Fedora Linux'
00:01:55.144 ++ VERSION='39 (Cloud Edition)'
00:01:55.144 ++ ID=fedora
00:01:55.144 ++ VERSION_ID=39
00:01:55.144 ++ VERSION_CODENAME=
00:01:55.144 ++ PLATFORM_ID=platform:f39
00:01:55.144 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:55.144 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:55.144 ++ LOGO=fedora-logo-icon
00:01:55.144 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:55.144 ++ HOME_URL=https://fedoraproject.org/
00:01:55.144 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:55.144 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:55.144 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:55.144 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:55.144 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:55.144 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:55.144 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:55.144 ++ SUPPORT_END=2024-11-12
00:01:55.144 ++ VARIANT='Cloud Edition'
00:01:55.144 ++ VARIANT_ID=cloud
00:01:55.144 + uname -a
00:01:55.144 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:55.144 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:56.083 Hugepages
00:01:56.083 node hugesize free / total
00:01:56.083 node0 1048576kB 0 / 0
00:01:56.083 node0 2048kB 0 / 0
00:01:56.083 node1 1048576kB 0 / 0
00:01:56.083 node1 2048kB 0 / 0
00:01:56.083
00:01:56.083 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:56.083 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:56.083 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:56.083 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:56.342 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:56.342 + rm -f /tmp/spdk-ld-path
00:01:56.342 + source autorun-spdk.conf
00:01:56.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.342 ++ SPDK_TEST_NVMF=1
00:01:56.342 ++ SPDK_TEST_NVME_CLI=1
00:01:56.342 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:56.342 ++ SPDK_TEST_NVMF_NICS=e810
00:01:56.342 ++ SPDK_TEST_VFIOUSER=1
00:01:56.342 ++ SPDK_RUN_UBSAN=1
00:01:56.342 ++ NET_TYPE=phy
00:01:56.342 ++
SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.342 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.342 ++ RUN_NIGHTLY=1 00:01:56.342 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:56.342 + [[ -n '' ]] 00:01:56.342 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:56.342 + for M in /var/spdk/build-*-manifest.txt 00:01:56.342 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:56.342 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:56.342 + for M in /var/spdk/build-*-manifest.txt 00:01:56.342 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:56.342 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:56.342 + for M in /var/spdk/build-*-manifest.txt 00:01:56.342 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:56.342 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:56.342 ++ uname 00:01:56.342 + [[ Linux == \L\i\n\u\x ]] 00:01:56.342 + sudo dmesg -T 00:01:56.342 + sudo dmesg --clear 00:01:56.342 + dmesg_pid=667222 00:01:56.342 + [[ Fedora Linux == FreeBSD ]] 00:01:56.342 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:56.342 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:56.342 + sudo dmesg -Tw 00:01:56.342 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:56.342 + [[ -x /usr/src/fio-static/fio ]] 00:01:56.342 + export FIO_BIN=/usr/src/fio-static/fio 00:01:56.342 + FIO_BIN=/usr/src/fio-static/fio 00:01:56.342 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:56.342 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:56.342 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:56.342 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:56.342 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:56.342 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:56.342 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:56.342 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:56.342 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:56.342 Test configuration: 00:01:56.342 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.342 SPDK_TEST_NVMF=1 00:01:56.342 SPDK_TEST_NVME_CLI=1 00:01:56.342 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:56.342 SPDK_TEST_NVMF_NICS=e810 00:01:56.342 SPDK_TEST_VFIOUSER=1 00:01:56.342 SPDK_RUN_UBSAN=1 00:01:56.342 NET_TYPE=phy 00:01:56.342 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.342 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.342 RUN_NIGHTLY=1 01:19:36 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:56.342 01:19:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:56.342 01:19:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:56.342 01:19:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:56.342 01:19:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.342 01:19:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.342 01:19:36 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.342 01:19:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.342 01:19:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.342 01:19:36 -- paths/export.sh@5 -- $ export PATH 00:01:56.342 01:19:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.342 01:19:36 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:56.342 01:19:36 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:56.342 01:19:36 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727738376.XXXXXX 00:01:56.342 01:19:36 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727738376.d1OhrZ 00:01:56.342 01:19:36 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:56.342 01:19:36 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:01:56.342 01:19:36 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.343 01:19:36 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:56.343 01:19:36 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:56.343 01:19:36 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:56.343 01:19:36 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:56.343 01:19:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:56.343 01:19:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.343 01:19:36 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 
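The config_params string assembled above is what autobuild later hands to SPDK's ./configure. A minimal sketch of replaying those recorded flags by hand, assuming the same workspace layout (the actual ./configure call happens inside spdk/autorun.sh and autobuild and is not part of this excerpt):

    # Sketch only: replays the flags captured in config_params above by hand.
    # The harness itself drives this through spdk/autorun.sh rather than manually.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure \
        --enable-debug --enable-werror \
        --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-vfio-user \
        --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    make -j"$(nproc)"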
00:01:56.343 01:19:36 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:56.343 01:19:36 -- pm/common@17 -- $ local monitor 00:01:56.343 01:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.343 01:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.343 01:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.343 01:19:36 -- pm/common@21 -- $ date +%s 00:01:56.343 01:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.343 01:19:36 -- pm/common@21 -- $ date +%s 00:01:56.343 01:19:36 -- pm/common@25 -- $ sleep 1 00:01:56.343 01:19:36 -- pm/common@21 -- $ date +%s 00:01:56.343 01:19:36 -- pm/common@21 -- $ date +%s 00:01:56.343 01:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727738376 00:01:56.343 01:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727738376 00:01:56.343 01:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727738376 00:01:56.343 01:19:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727738376 00:01:56.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727738376_collect-vmstat.pm.log 00:01:56.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727738376_collect-cpu-load.pm.log 00:01:56.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727738376_collect-cpu-temp.pm.log 00:01:56.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727738376_collect-bmc-pm.bmc.pm.log 00:01:57.727 01:19:37 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:57.727 01:19:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:57.727 01:19:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:57.727 01:19:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.727 01:19:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:57.727 Mon Sep 30 11:19:37 PM UTC 2024 00:01:57.727 01:19:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:57.727 v25.01-pre-17-g09cc66129 00:01:57.727 01:19:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:57.727 01:19:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:57.727 01:19:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:57.727 01:19:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:57.727 01:19:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.727 01:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.727 ************************************ 00:01:57.727 START TEST ubsan 00:01:57.727 ************************************ 00:01:57.727 01:19:37 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:57.727 using ubsan 00:01:57.727 00:01:57.727 real 0m0.000s 00:01:57.727 user 
0m0.000s 00:01:57.727 sys 0m0.000s 00:01:57.727 01:19:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:57.727 01:19:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.727 ************************************ 00:01:57.727 END TEST ubsan 00:01:57.727 ************************************ 00:01:57.727 01:19:37 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:57.727 01:19:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:57.727 01:19:37 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:57.727 01:19:37 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:57.727 01:19:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.727 01:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.727 ************************************ 00:01:57.727 START TEST build_native_dpdk 00:01:57.727 ************************************ 00:01:57.727 01:19:37 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:57.727 01:19:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:57.728 eeb0605f11 version: 23.11.0 00:01:57.728 238778122a doc: update release notes for 23.11 00:01:57.728 46aa6b3cfc doc: fix description of RSS features 00:01:57.728 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:57.728 7e421ae345 devtools: support skipping forbid rule check 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.728 01:19:37 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:57.728 patching file config/rte_config.h 00:01:57.728 Hunk #1 succeeded at 60 (offset 1 line). 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:57.728 patching file lib/pcapng/rte_pcapng.c 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:57.728 01:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:57.728 01:19:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.920 The Meson build system 00:02:01.920 Version: 1.5.0 00:02:01.920 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:01.920 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:01.920 Build type: native build 00:02:01.920 Program cat found: YES (/usr/bin/cat) 00:02:01.920 Project name: DPDK 00:02:01.920 Project version: 23.11.0 00:02:01.920 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:01.920 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:01.920 Host machine cpu family: x86_64 00:02:01.920 Host machine cpu: x86_64 00:02:01.920 Message: ## Building in Developer Mode ## 00:02:01.920 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:01.920 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:01.920 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:01.920 Program python3 found: YES (/usr/bin/python3) 00:02:01.920 Program cat found: YES (/usr/bin/cat) 00:02:01.920 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
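The lt/ge checks traced just before the meson call above come from cmp_versions in scripts/common.sh: each dotted version is split into fields and compared numerically, which is how the harness decides which compatibility patches to apply against DPDK 23.11.0 (the two "patching file" lines above). An equivalent standalone check, shown only as a sketch that uses GNU sort -V instead of the script's own field-by-field loop:

    # Sketch only, not the script's implementation: version ordering via GNU sort -V.
    version_lt() {    # succeeds when $1 sorts strictly before $2
        [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 23.11.0 21.11.0 || echo '23.11.0 is not older than 21.11.0'   # matches the lt check returning 1 above
    version_lt 23.11.0 24.07.0 && echo '23.11.0 is older than 24.07.0'       # matches the lt check returning 0 above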
00:02:01.920 Compiler for C supports arguments -march=native: YES 00:02:01.920 Checking for size of "void *" : 8 00:02:01.920 Checking for size of "void *" : 8 (cached) 00:02:01.920 Library m found: YES 00:02:01.920 Library numa found: YES 00:02:01.920 Has header "numaif.h" : YES 00:02:01.920 Library fdt found: NO 00:02:01.920 Library execinfo found: NO 00:02:01.920 Has header "execinfo.h" : YES 00:02:01.920 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:01.920 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:01.920 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:01.920 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:01.920 Run-time dependency openssl found: YES 3.1.1 00:02:01.920 Run-time dependency libpcap found: YES 1.10.4 00:02:01.920 Has header "pcap.h" with dependency libpcap: YES 00:02:01.920 Compiler for C supports arguments -Wcast-qual: YES 00:02:01.920 Compiler for C supports arguments -Wdeprecated: YES 00:02:01.920 Compiler for C supports arguments -Wformat: YES 00:02:01.920 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:01.920 Compiler for C supports arguments -Wformat-security: NO 00:02:01.920 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.920 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:01.920 Compiler for C supports arguments -Wnested-externs: YES 00:02:01.920 Compiler for C supports arguments -Wold-style-definition: YES 00:02:01.920 Compiler for C supports arguments -Wpointer-arith: YES 00:02:01.920 Compiler for C supports arguments -Wsign-compare: YES 00:02:01.920 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:01.920 Compiler for C supports arguments -Wundef: YES 00:02:01.920 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.920 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:01.920 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:01.920 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.920 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:01.920 Program objdump found: YES (/usr/bin/objdump) 00:02:01.920 Compiler for C supports arguments -mavx512f: YES 00:02:01.920 Checking if "AVX512 checking" compiles: YES 00:02:01.920 Fetching value of define "__SSE4_2__" : 1 00:02:01.920 Fetching value of define "__AES__" : 1 00:02:01.920 Fetching value of define "__AVX__" : 1 00:02:01.920 Fetching value of define "__AVX2__" : (undefined) 00:02:01.920 Fetching value of define "__AVX512BW__" : (undefined) 00:02:01.920 Fetching value of define "__AVX512CD__" : (undefined) 00:02:01.920 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:01.920 Fetching value of define "__AVX512F__" : (undefined) 00:02:01.920 Fetching value of define "__AVX512VL__" : (undefined) 00:02:01.920 Fetching value of define "__PCLMUL__" : 1 00:02:01.920 Fetching value of define "__RDRND__" : 1 00:02:01.920 Fetching value of define "__RDSEED__" : (undefined) 00:02:01.920 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:01.920 Fetching value of define "__znver1__" : (undefined) 00:02:01.920 Fetching value of define "__znver2__" : (undefined) 00:02:01.920 Fetching value of define "__znver3__" : (undefined) 00:02:01.920 Fetching value of define "__znver4__" : (undefined) 00:02:01.920 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:01.920 Message: lib/log: Defining dependency "log" 00:02:01.920 Message: lib/kvargs: Defining dependency 
"kvargs" 00:02:01.920 Message: lib/telemetry: Defining dependency "telemetry" 00:02:01.920 Checking for function "getentropy" : NO 00:02:01.920 Message: lib/eal: Defining dependency "eal" 00:02:01.920 Message: lib/ring: Defining dependency "ring" 00:02:01.920 Message: lib/rcu: Defining dependency "rcu" 00:02:01.920 Message: lib/mempool: Defining dependency "mempool" 00:02:01.920 Message: lib/mbuf: Defining dependency "mbuf" 00:02:01.920 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:01.920 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.920 Compiler for C supports arguments -mpclmul: YES 00:02:01.920 Compiler for C supports arguments -maes: YES 00:02:01.920 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.920 Compiler for C supports arguments -mavx512bw: YES 00:02:01.920 Compiler for C supports arguments -mavx512dq: YES 00:02:01.920 Compiler for C supports arguments -mavx512vl: YES 00:02:01.920 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:01.920 Compiler for C supports arguments -mavx2: YES 00:02:01.920 Compiler for C supports arguments -mavx: YES 00:02:01.920 Message: lib/net: Defining dependency "net" 00:02:01.920 Message: lib/meter: Defining dependency "meter" 00:02:01.920 Message: lib/ethdev: Defining dependency "ethdev" 00:02:01.920 Message: lib/pci: Defining dependency "pci" 00:02:01.920 Message: lib/cmdline: Defining dependency "cmdline" 00:02:01.920 Message: lib/metrics: Defining dependency "metrics" 00:02:01.920 Message: lib/hash: Defining dependency "hash" 00:02:01.920 Message: lib/timer: Defining dependency "timer" 00:02:01.920 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:01.920 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:01.920 Message: lib/acl: Defining dependency "acl" 00:02:01.920 Message: lib/bbdev: Defining dependency "bbdev" 00:02:01.920 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:01.920 Run-time dependency libelf found: YES 0.191 00:02:01.920 Message: lib/bpf: Defining dependency "bpf" 00:02:01.920 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:01.920 Message: lib/compressdev: Defining dependency "compressdev" 00:02:01.920 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:01.920 Message: lib/distributor: Defining dependency "distributor" 00:02:01.920 Message: lib/dmadev: Defining dependency "dmadev" 00:02:01.920 Message: lib/efd: Defining dependency "efd" 00:02:01.920 Message: lib/eventdev: Defining dependency "eventdev" 00:02:01.920 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:01.920 Message: lib/gpudev: Defining dependency "gpudev" 00:02:01.920 Message: lib/gro: Defining dependency "gro" 00:02:01.920 Message: lib/gso: Defining dependency "gso" 00:02:01.920 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:01.920 Message: lib/jobstats: Defining dependency "jobstats" 00:02:01.920 Message: lib/latencystats: Defining dependency "latencystats" 00:02:01.920 Message: lib/lpm: Defining dependency "lpm" 00:02:01.920 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:01.920 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:01.920 Message: lib/member: Defining dependency "member" 00:02:01.920 Message: lib/pcapng: Defining dependency "pcapng" 00:02:01.920 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:01.920 Message: lib/power: Defining dependency "power" 00:02:01.920 Message: lib/rawdev: Defining dependency "rawdev" 00:02:01.920 Message: lib/regexdev: Defining dependency "regexdev" 00:02:01.920 Message: lib/mldev: Defining dependency "mldev" 00:02:01.920 Message: lib/rib: Defining dependency "rib" 00:02:01.920 Message: lib/reorder: Defining dependency "reorder" 00:02:01.920 Message: lib/sched: Defining dependency "sched" 00:02:01.920 Message: lib/security: Defining dependency "security" 00:02:01.920 Message: lib/stack: Defining dependency "stack" 00:02:01.920 Has header "linux/userfaultfd.h" : YES 00:02:01.920 Has header "linux/vduse.h" : YES 00:02:01.920 Message: lib/vhost: Defining dependency "vhost" 00:02:01.920 Message: lib/ipsec: Defining dependency "ipsec" 00:02:01.920 Message: lib/pdcp: Defining dependency "pdcp" 00:02:01.920 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.920 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:01.920 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:01.920 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:01.920 Message: lib/fib: Defining dependency "fib" 00:02:01.920 Message: lib/port: Defining dependency "port" 00:02:01.920 Message: lib/pdump: Defining dependency "pdump" 00:02:01.920 Message: lib/table: Defining dependency "table" 00:02:01.920 Message: lib/pipeline: Defining dependency "pipeline" 00:02:01.920 Message: lib/graph: Defining dependency "graph" 00:02:01.920 Message: lib/node: Defining dependency "node" 00:02:03.833 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.833 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.833 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.833 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.833 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:03.833 Compiler for C supports arguments -Wno-unused-value: YES 00:02:03.833 Compiler for C supports arguments -Wno-format: YES 00:02:03.833 Compiler for C supports arguments -Wno-format-security: YES 00:02:03.833 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:03.833 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:03.833 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:03.833 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:03.833 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.833 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.833 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:03.833 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:03.833 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:03.833 Has header "sys/epoll.h" : YES 00:02:03.833 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.833 Configuring doxy-api-html.conf using configuration 00:02:03.833 Configuring doxy-api-man.conf using configuration 00:02:03.833 Program mandb found: YES (/usr/bin/mandb) 00:02:03.833 Program sphinx-build found: NO 00:02:03.833 Configuring rte_build_config.h using configuration 00:02:03.833 Message: 00:02:03.833 ================= 00:02:03.833 Applications Enabled 
00:02:03.833 ================= 00:02:03.833 00:02:03.833 apps: 00:02:03.833 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:03.833 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:03.833 test-pmd, test-regex, test-sad, test-security-perf, 00:02:03.833 00:02:03.833 Message: 00:02:03.833 ================= 00:02:03.833 Libraries Enabled 00:02:03.833 ================= 00:02:03.833 00:02:03.833 libs: 00:02:03.833 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.833 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:03.833 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:03.833 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:03.833 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:03.833 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:03.833 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:03.833 00:02:03.833 00:02:03.833 Message: 00:02:03.833 =============== 00:02:03.833 Drivers Enabled 00:02:03.833 =============== 00:02:03.833 00:02:03.833 common: 00:02:03.833 00:02:03.833 bus: 00:02:03.833 pci, vdev, 00:02:03.833 mempool: 00:02:03.833 ring, 00:02:03.833 dma: 00:02:03.833 00:02:03.833 net: 00:02:03.833 i40e, 00:02:03.833 raw: 00:02:03.833 00:02:03.833 crypto: 00:02:03.833 00:02:03.833 compress: 00:02:03.833 00:02:03.833 regex: 00:02:03.833 00:02:03.833 ml: 00:02:03.833 00:02:03.833 vdpa: 00:02:03.833 00:02:03.833 event: 00:02:03.833 00:02:03.833 baseband: 00:02:03.833 00:02:03.833 gpu: 00:02:03.833 00:02:03.833 00:02:03.833 Message: 00:02:03.833 ================= 00:02:03.833 Content Skipped 00:02:03.833 ================= 00:02:03.833 00:02:03.833 apps: 00:02:03.833 00:02:03.833 libs: 00:02:03.833 00:02:03.833 drivers: 00:02:03.833 common/cpt: not in enabled drivers build config 00:02:03.833 common/dpaax: not in enabled drivers build config 00:02:03.833 common/iavf: not in enabled drivers build config 00:02:03.833 common/idpf: not in enabled drivers build config 00:02:03.833 common/mvep: not in enabled drivers build config 00:02:03.833 common/octeontx: not in enabled drivers build config 00:02:03.833 bus/auxiliary: not in enabled drivers build config 00:02:03.833 bus/cdx: not in enabled drivers build config 00:02:03.833 bus/dpaa: not in enabled drivers build config 00:02:03.833 bus/fslmc: not in enabled drivers build config 00:02:03.833 bus/ifpga: not in enabled drivers build config 00:02:03.833 bus/platform: not in enabled drivers build config 00:02:03.833 bus/vmbus: not in enabled drivers build config 00:02:03.833 common/cnxk: not in enabled drivers build config 00:02:03.833 common/mlx5: not in enabled drivers build config 00:02:03.833 common/nfp: not in enabled drivers build config 00:02:03.833 common/qat: not in enabled drivers build config 00:02:03.833 common/sfc_efx: not in enabled drivers build config 00:02:03.833 mempool/bucket: not in enabled drivers build config 00:02:03.833 mempool/cnxk: not in enabled drivers build config 00:02:03.833 mempool/dpaa: not in enabled drivers build config 00:02:03.833 mempool/dpaa2: not in enabled drivers build config 00:02:03.833 mempool/octeontx: not in enabled drivers build config 00:02:03.833 mempool/stack: not in enabled drivers build config 00:02:03.833 dma/cnxk: not in enabled drivers build config 00:02:03.833 dma/dpaa: not in enabled drivers build config 00:02:03.833 dma/dpaa2: not in enabled 
drivers build config 00:02:03.833 dma/hisilicon: not in enabled drivers build config 00:02:03.833 dma/idxd: not in enabled drivers build config 00:02:03.833 dma/ioat: not in enabled drivers build config 00:02:03.833 dma/skeleton: not in enabled drivers build config 00:02:03.833 net/af_packet: not in enabled drivers build config 00:02:03.833 net/af_xdp: not in enabled drivers build config 00:02:03.833 net/ark: not in enabled drivers build config 00:02:03.833 net/atlantic: not in enabled drivers build config 00:02:03.833 net/avp: not in enabled drivers build config 00:02:03.833 net/axgbe: not in enabled drivers build config 00:02:03.833 net/bnx2x: not in enabled drivers build config 00:02:03.833 net/bnxt: not in enabled drivers build config 00:02:03.834 net/bonding: not in enabled drivers build config 00:02:03.834 net/cnxk: not in enabled drivers build config 00:02:03.834 net/cpfl: not in enabled drivers build config 00:02:03.834 net/cxgbe: not in enabled drivers build config 00:02:03.834 net/dpaa: not in enabled drivers build config 00:02:03.834 net/dpaa2: not in enabled drivers build config 00:02:03.834 net/e1000: not in enabled drivers build config 00:02:03.834 net/ena: not in enabled drivers build config 00:02:03.834 net/enetc: not in enabled drivers build config 00:02:03.834 net/enetfec: not in enabled drivers build config 00:02:03.834 net/enic: not in enabled drivers build config 00:02:03.834 net/failsafe: not in enabled drivers build config 00:02:03.834 net/fm10k: not in enabled drivers build config 00:02:03.834 net/gve: not in enabled drivers build config 00:02:03.834 net/hinic: not in enabled drivers build config 00:02:03.834 net/hns3: not in enabled drivers build config 00:02:03.834 net/iavf: not in enabled drivers build config 00:02:03.834 net/ice: not in enabled drivers build config 00:02:03.834 net/idpf: not in enabled drivers build config 00:02:03.834 net/igc: not in enabled drivers build config 00:02:03.834 net/ionic: not in enabled drivers build config 00:02:03.834 net/ipn3ke: not in enabled drivers build config 00:02:03.834 net/ixgbe: not in enabled drivers build config 00:02:03.834 net/mana: not in enabled drivers build config 00:02:03.834 net/memif: not in enabled drivers build config 00:02:03.834 net/mlx4: not in enabled drivers build config 00:02:03.834 net/mlx5: not in enabled drivers build config 00:02:03.834 net/mvneta: not in enabled drivers build config 00:02:03.834 net/mvpp2: not in enabled drivers build config 00:02:03.834 net/netvsc: not in enabled drivers build config 00:02:03.834 net/nfb: not in enabled drivers build config 00:02:03.834 net/nfp: not in enabled drivers build config 00:02:03.834 net/ngbe: not in enabled drivers build config 00:02:03.834 net/null: not in enabled drivers build config 00:02:03.834 net/octeontx: not in enabled drivers build config 00:02:03.834 net/octeon_ep: not in enabled drivers build config 00:02:03.834 net/pcap: not in enabled drivers build config 00:02:03.834 net/pfe: not in enabled drivers build config 00:02:03.834 net/qede: not in enabled drivers build config 00:02:03.834 net/ring: not in enabled drivers build config 00:02:03.834 net/sfc: not in enabled drivers build config 00:02:03.834 net/softnic: not in enabled drivers build config 00:02:03.834 net/tap: not in enabled drivers build config 00:02:03.834 net/thunderx: not in enabled drivers build config 00:02:03.834 net/txgbe: not in enabled drivers build config 00:02:03.834 net/vdev_netvsc: not in enabled drivers build config 00:02:03.834 net/vhost: not in enabled drivers 
build config 00:02:03.834 net/virtio: not in enabled drivers build config 00:02:03.834 net/vmxnet3: not in enabled drivers build config 00:02:03.834 raw/cnxk_bphy: not in enabled drivers build config 00:02:03.834 raw/cnxk_gpio: not in enabled drivers build config 00:02:03.834 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:03.834 raw/ifpga: not in enabled drivers build config 00:02:03.834 raw/ntb: not in enabled drivers build config 00:02:03.834 raw/skeleton: not in enabled drivers build config 00:02:03.834 crypto/armv8: not in enabled drivers build config 00:02:03.834 crypto/bcmfs: not in enabled drivers build config 00:02:03.834 crypto/caam_jr: not in enabled drivers build config 00:02:03.834 crypto/ccp: not in enabled drivers build config 00:02:03.834 crypto/cnxk: not in enabled drivers build config 00:02:03.834 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.834 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.834 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.834 crypto/mlx5: not in enabled drivers build config 00:02:03.834 crypto/mvsam: not in enabled drivers build config 00:02:03.834 crypto/nitrox: not in enabled drivers build config 00:02:03.834 crypto/null: not in enabled drivers build config 00:02:03.834 crypto/octeontx: not in enabled drivers build config 00:02:03.834 crypto/openssl: not in enabled drivers build config 00:02:03.834 crypto/scheduler: not in enabled drivers build config 00:02:03.834 crypto/uadk: not in enabled drivers build config 00:02:03.834 crypto/virtio: not in enabled drivers build config 00:02:03.834 compress/isal: not in enabled drivers build config 00:02:03.834 compress/mlx5: not in enabled drivers build config 00:02:03.834 compress/octeontx: not in enabled drivers build config 00:02:03.834 compress/zlib: not in enabled drivers build config 00:02:03.834 regex/mlx5: not in enabled drivers build config 00:02:03.834 regex/cn9k: not in enabled drivers build config 00:02:03.834 ml/cnxk: not in enabled drivers build config 00:02:03.834 vdpa/ifc: not in enabled drivers build config 00:02:03.834 vdpa/mlx5: not in enabled drivers build config 00:02:03.834 vdpa/nfp: not in enabled drivers build config 00:02:03.834 vdpa/sfc: not in enabled drivers build config 00:02:03.834 event/cnxk: not in enabled drivers build config 00:02:03.834 event/dlb2: not in enabled drivers build config 00:02:03.834 event/dpaa: not in enabled drivers build config 00:02:03.834 event/dpaa2: not in enabled drivers build config 00:02:03.834 event/dsw: not in enabled drivers build config 00:02:03.834 event/opdl: not in enabled drivers build config 00:02:03.834 event/skeleton: not in enabled drivers build config 00:02:03.834 event/sw: not in enabled drivers build config 00:02:03.834 event/octeontx: not in enabled drivers build config 00:02:03.834 baseband/acc: not in enabled drivers build config 00:02:03.834 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:03.834 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:03.834 baseband/la12xx: not in enabled drivers build config 00:02:03.834 baseband/null: not in enabled drivers build config 00:02:03.834 baseband/turbo_sw: not in enabled drivers build config 00:02:03.834 gpu/cuda: not in enabled drivers build config 00:02:03.834 00:02:03.834 00:02:03.834 Build targets in project: 220 00:02:03.834 00:02:03.834 DPDK 23.11.0 00:02:03.834 00:02:03.834 User defined options 00:02:03.834 libdir : lib 00:02:03.834 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:02:03.834 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:03.834 c_link_args : 00:02:03.834 enable_docs : false 00:02:03.834 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:03.834 enable_kmods : false 00:02:03.834 machine : native 00:02:03.834 tests : false 00:02:03.834 00:02:03.834 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.834 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:03.834 01:19:43 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:03.834 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:03.834 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.834 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.834 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.834 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.834 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.834 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.834 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.834 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.834 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.834 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.098 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.098 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.098 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.098 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.098 [15/710] Linking static target lib/librte_kvargs.a 00:02:04.098 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.098 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.098 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.098 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.098 [20/710] Linking static target lib/librte_log.a 00:02:04.362 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.362 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.936 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.936 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.936 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.936 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.936 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.936 [28/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.936 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.936 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.936 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.936 [32/710] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.936 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.936 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.936 [35/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.936 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.936 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.936 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.936 [39/710] Linking target lib/librte_log.so.24.0 00:02:04.936 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.936 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.936 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.936 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.201 [44/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.201 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.201 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.201 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.201 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.201 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.201 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.201 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.201 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.201 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.201 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.201 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.201 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.201 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.201 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.201 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.201 [60/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:05.201 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.201 [62/710] Linking target lib/librte_kvargs.so.24.0 00:02:05.464 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.464 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.464 [65/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.464 [66/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:05.728 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.728 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.728 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.728 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.728 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.728 [72/710] 
Linking static target lib/librte_pci.a 00:02:05.728 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.990 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.991 [75/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.991 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.991 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.991 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.991 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.991 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.991 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.259 [82/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.259 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.259 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.259 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.259 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.259 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.259 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.259 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.259 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.259 [91/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.259 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.259 [93/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.259 [94/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.259 [95/710] Linking static target lib/librte_ring.a 00:02:06.259 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.259 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.259 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.259 [99/710] Linking static target lib/librte_meter.a 00:02:06.259 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.553 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.553 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.553 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.553 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:06.553 [105/710] Linking static target lib/librte_telemetry.a 00:02:06.553 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.553 [107/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.553 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.553 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.553 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.553 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.553 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.553 [113/710] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.859 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.859 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.859 [116/710] Linking static target lib/librte_eal.a 00:02:06.859 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.859 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.859 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.859 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:06.859 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.859 [122/710] Linking static target lib/librte_net.a 00:02:06.859 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.859 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:07.167 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.167 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.167 [127/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.167 [128/710] Linking static target lib/librte_cmdline.a 00:02:07.167 [129/710] Linking static target lib/librte_mempool.a 00:02:07.167 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.167 [131/710] Linking target lib/librte_telemetry.so.24.0 00:02:07.167 [132/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:07.442 [133/710] Linking static target lib/librte_cfgfile.a 00:02:07.442 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.442 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:07.442 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.442 [137/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.442 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:07.442 [139/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:07.442 [140/710] Linking static target lib/librte_metrics.a 00:02:07.442 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:07.442 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:07.442 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:07.705 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:07.705 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:07.705 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.705 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:07.705 [148/710] Linking static target lib/librte_rcu.a 00:02:07.705 [149/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:07.705 [150/710] Linking static target lib/librte_bitratestats.a 00:02:07.705 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:07.705 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.705 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:07.968 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:07.968 
[155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:07.968 [156/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.968 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:07.968 [158/710] Linking static target lib/librte_timer.a 00:02:07.968 [159/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.968 [160/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:07.968 [161/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.968 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.968 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.230 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:08.230 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.230 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:08.230 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:08.497 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:08.497 [169/710] Linking static target lib/librte_bbdev.a 00:02:08.497 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:08.497 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.497 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.497 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.497 [174/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.497 [175/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:08.762 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.762 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.762 [178/710] Linking static target lib/librte_compressdev.a 00:02:08.762 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:08.762 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:08.762 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:09.025 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.025 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:09.025 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:09.025 [185/710] Linking static target lib/librte_distributor.a 00:02:09.025 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:09.287 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.287 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:09.287 [189/710] Linking static target lib/librte_dmadev.a 00:02:09.287 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:09.288 [191/710] Linking static target lib/librte_bpf.a 00:02:09.288 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:09.553 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:09.553 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:09.553 [195/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.553 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:09.553 [197/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:09.553 [198/710] Linking static target lib/librte_dispatcher.a 00:02:09.553 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:09.553 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:09.553 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:09.818 [202/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:09.818 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:09.818 [204/710] Linking static target lib/librte_gpudev.a 00:02:09.818 [205/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.818 [206/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:09.818 [207/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:09.818 [208/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:09.818 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:09.818 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.818 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:09.818 [212/710] Linking static target lib/librte_gro.a 00:02:09.818 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.818 [214/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.818 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:10.082 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.082 [217/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:10.082 [218/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:10.082 [219/710] Linking static target lib/librte_jobstats.a 00:02:10.082 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:10.343 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.343 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.343 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:10.343 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:10.607 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:10.608 [226/710] Linking static target lib/librte_latencystats.a 00:02:10.608 [227/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:10.608 [228/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:10.608 [229/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:10.608 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:10.608 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:10.608 [232/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.608 [233/710] Compiling C object 
lib/librte_member.a.p/member_rte_member.c.o 00:02:10.869 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:10.869 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:10.869 [236/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:10.869 [237/710] Linking static target lib/librte_ip_frag.a 00:02:10.869 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.869 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.869 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:11.131 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.131 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:11.131 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.131 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:11.131 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.131 [246/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:11.131 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:11.392 [248/710] Linking static target lib/librte_gso.a 00:02:11.392 [249/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.392 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:11.392 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:11.392 [252/710] Linking static target lib/librte_regexdev.a 00:02:11.392 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:11.392 [254/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:11.392 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:11.392 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:11.655 [257/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:11.655 [258/710] Linking static target lib/librte_rawdev.a 00:02:11.655 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.655 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:11.655 [261/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:11.655 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:11.655 [263/710] Linking static target lib/librte_efd.a 00:02:11.655 [264/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:11.655 [265/710] Linking static target lib/librte_mldev.a 00:02:11.916 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:11.916 [267/710] Linking static target lib/librte_pcapng.a 00:02:11.916 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:11.916 [269/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:11.916 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:11.916 [271/710] Linking static target lib/librte_lpm.a 00:02:11.916 [272/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:11.916 [273/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:11.916 [274/710] Linking static target lib/librte_stack.a 00:02:11.916 [275/710] 
Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:11.916 [276/710] Linking static target lib/acl/libavx2_tmp.a 00:02:12.178 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.178 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.178 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.178 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.178 [281/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.178 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.178 [283/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.443 [284/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:12.443 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.443 [286/710] Linking static target lib/librte_hash.a 00:02:12.443 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.443 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.443 [289/710] Linking static target lib/librte_reorder.a 00:02:12.443 [290/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.443 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.443 [292/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:12.443 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.443 [294/710] Linking static target lib/acl/libavx512_tmp.a 00:02:12.443 [295/710] Linking static target lib/librte_power.a 00:02:12.443 [296/710] Linking static target lib/librte_acl.a 00:02:12.443 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.703 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.703 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.703 [300/710] Linking static target lib/librte_security.a 00:02:12.703 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:12.968 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:12.968 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.968 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.968 [305/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:12.968 [306/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:12.968 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.968 [308/710] Linking static target lib/librte_mbuf.a 00:02:12.968 [309/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.968 [310/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:12.968 [311/710] Linking static target lib/librte_rib.a 00:02:12.968 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:13.230 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:13.230 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:13.230 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:13.230 [316/710] Generating lib/hash.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:13.230 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:13.491 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.491 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:13.491 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:13.491 [321/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:13.491 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:13.491 [323/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.491 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:13.491 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:13.491 [326/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:13.754 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.754 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.754 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.754 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:13.754 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:13.754 [332/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.016 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:14.277 [334/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:14.277 [335/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:14.277 [336/710] Linking static target lib/librte_member.a 00:02:14.277 [337/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:14.277 [338/710] Linking static target lib/librte_eventdev.a 00:02:14.538 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.538 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.538 [341/710] Linking static target lib/librte_cryptodev.a 00:02:14.538 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:14.538 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:14.538 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:14.538 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:14.538 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:14.538 [347/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:14.538 [348/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:14.803 [349/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:14.803 [350/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.803 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:14.803 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:14.803 [353/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:14.803 [354/710] Linking static target lib/librte_ethdev.a 00:02:14.803 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:14.803 [356/710] Linking static target lib/librte_sched.a 
00:02:14.803 [357/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:14.803 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:14.803 [359/710] Linking static target lib/librte_fib.a 00:02:14.803 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:15.070 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:15.070 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:15.070 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:15.070 [364/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:15.070 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:15.070 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:15.337 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.337 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:15.337 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:15.337 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.337 [371/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:15.337 [372/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.598 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:15.598 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:15.598 [375/710] Linking static target lib/librte_pdump.a 00:02:15.864 [376/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:15.864 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:15.864 [378/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:15.864 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:15.864 [380/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.864 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:15.864 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:15.864 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:15.864 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:16.122 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:16.122 [386/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.122 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:16.122 [388/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.122 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:16.122 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:16.122 [391/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:16.122 [392/710] Linking static target lib/librte_ipsec.a 00:02:16.380 [393/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:16.380 [394/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:16.380 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:16.380 [396/710] Linking static target lib/librte_table.a 00:02:16.380 [397/710] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:16.643 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:16.643 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:16.643 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:16.643 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.906 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:17.166 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:17.166 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.166 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:17.166 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:17.430 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:17.430 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.430 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.430 [410/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:17.430 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.430 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.430 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:17.430 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.694 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.694 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:17.694 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.694 [418/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.694 [419/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.694 [420/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.959 [421/710] Linking target lib/librte_eal.so.24.0 00:02:17.959 [422/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:17.959 [423/710] Linking static target lib/librte_port.a 00:02:17.959 [424/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:17.959 [425/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.959 [426/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.959 [427/710] Linking static target drivers/librte_bus_vdev.a 00:02:17.959 [428/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:17.959 [429/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.221 [430/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:18.221 [431/710] Linking target lib/librte_ring.so.24.0 00:02:18.221 [432/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:18.221 [433/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.221 [434/710] Linking target lib/librte_meter.so.24.0 00:02:18.221 [435/710] Linking target lib/librte_pci.so.24.0 00:02:18.221 [436/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:18.482 [437/710] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.482 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:18.482 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:18.482 [440/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:18.482 [441/710] Linking target lib/librte_timer.so.24.0 00:02:18.482 [442/710] Linking target lib/librte_acl.so.24.0 00:02:18.482 [443/710] Linking target lib/librte_cfgfile.so.24.0 00:02:18.482 [444/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:18.482 [445/710] Linking target lib/librte_dmadev.so.24.0 00:02:18.483 [446/710] Linking target lib/librte_rcu.so.24.0 00:02:18.483 [447/710] Linking target lib/librte_mempool.so.24.0 00:02:18.483 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:18.483 [449/710] Linking target lib/librte_jobstats.so.24.0 00:02:18.483 [450/710] Linking target lib/librte_rawdev.so.24.0 00:02:18.483 [451/710] Linking static target lib/librte_graph.a 00:02:18.483 [452/710] Linking target lib/librte_stack.so.24.0 00:02:18.745 [453/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.745 [454/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.745 [455/710] Linking static target drivers/librte_bus_pci.a 00:02:18.745 [456/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:18.745 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.745 [458/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.745 [459/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:18.745 [460/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:18.745 [461/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:18.745 [462/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:18.745 [463/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:18.745 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:18.745 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:18.745 [466/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.745 [467/710] Linking target lib/librte_mbuf.so.24.0 00:02:18.745 [468/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:18.745 [469/710] Linking target lib/librte_rib.so.24.0 00:02:19.012 [470/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:19.012 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:19.012 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:19.012 [473/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:19.012 [474/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:19.012 [475/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.272 [476/710] Linking target lib/librte_net.so.24.0 00:02:19.272 [477/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:19.272 [478/710] Linking target lib/librte_fib.so.24.0 00:02:19.272 [479/710] Linking target 
lib/librte_bbdev.so.24.0 00:02:19.272 [480/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:19.272 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:19.272 [482/710] Linking target lib/librte_compressdev.so.24.0 00:02:19.272 [483/710] Linking target lib/librte_cryptodev.so.24.0 00:02:19.272 [484/710] Linking target lib/librte_distributor.so.24.0 00:02:19.272 [485/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:19.272 [486/710] Linking target lib/librte_gpudev.so.24.0 00:02:19.272 [487/710] Linking target lib/librte_regexdev.so.24.0 00:02:19.272 [488/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.272 [489/710] Linking target lib/librte_mldev.so.24.0 00:02:19.272 [490/710] Linking static target drivers/librte_mempool_ring.a 00:02:19.272 [491/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.272 [492/710] Linking target lib/librte_reorder.so.24.0 00:02:19.272 [493/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:19.272 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:19.272 [495/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:19.272 [496/710] Linking target lib/librte_sched.so.24.0 00:02:19.537 [497/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:19.537 [498/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:19.537 [499/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:19.537 [500/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:19.537 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:19.537 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:19.537 [503/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.537 [504/710] Linking target lib/librte_hash.so.24.0 00:02:19.537 [505/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:19.537 [506/710] Linking target lib/librte_cmdline.so.24.0 00:02:19.537 [507/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:19.537 [508/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.537 [509/710] Linking target lib/librte_security.so.24.0 00:02:19.537 [510/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:19.537 [511/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:19.537 [512/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:19.798 [513/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:19.798 [514/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:19.798 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:19.798 [516/710] Linking target lib/librte_efd.so.24.0 00:02:19.798 [517/710] Linking target lib/librte_member.so.24.0 00:02:19.798 [518/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:19.798 [519/710] Linking target lib/librte_lpm.so.24.0 00:02:20.056 [520/710] Linking target lib/librte_ipsec.so.24.0 00:02:20.056 [521/710] Compiling C object 
app/dpdk-graph.p/graph_main.c.o 00:02:20.056 [522/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:20.056 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:20.057 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:20.317 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:20.317 [526/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:20.317 [527/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:20.317 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:20.582 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:20.582 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:20.582 [531/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:20.582 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:20.843 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:20.843 [534/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:20.843 [535/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:20.843 [536/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:20.843 [537/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:21.113 [538/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:21.113 [539/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:21.113 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:21.113 [541/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:21.113 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:21.374 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:21.374 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:21.374 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:21.640 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:21.640 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:21.640 [548/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:21.640 [549/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:21.640 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:21.640 [551/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:21.640 [552/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:21.640 [553/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:21.640 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:21.903 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:21.903 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:21.903 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:22.165 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:22.165 [559/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:22.428 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:22.693 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:22.693 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:22.693 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:22.693 [564/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:22.958 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:22.958 [566/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:22.958 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:22.958 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:22.958 [569/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.958 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:23.222 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:23.222 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:23.222 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:23.222 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:23.484 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:23.484 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:23.484 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:23.484 [578/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:23.484 [579/710] Linking target lib/librte_metrics.so.24.0 00:02:23.484 [580/710] Linking target lib/librte_bpf.so.24.0 00:02:23.484 [581/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:23.484 [582/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:23.746 [583/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:23.746 [584/710] Linking target lib/librte_gro.so.24.0 00:02:23.746 [585/710] Linking target lib/librte_eventdev.so.24.0 00:02:23.746 [586/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:23.746 [587/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:23.746 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:23.746 [589/710] Linking static target lib/librte_pdcp.a 00:02:23.746 [590/710] Linking target lib/librte_gso.so.24.0 00:02:23.746 [591/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:23.746 [592/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:23.746 [593/710] Linking target lib/librte_ip_frag.so.24.0 00:02:23.746 [594/710] Linking target lib/librte_pcapng.so.24.0 00:02:23.746 [595/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:23.746 [596/710] Linking target lib/librte_power.so.24.0 00:02:23.746 [597/710] Linking target lib/librte_bitratestats.so.24.0 00:02:24.008 [598/710] Linking target lib/librte_latencystats.so.24.0 00:02:24.008 [599/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:24.008 
[600/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:24.008 [601/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:24.008 [602/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:24.008 [603/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:24.008 [604/710] Linking target lib/librte_dispatcher.so.24.0 00:02:24.008 [605/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:24.008 [606/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:24.008 [607/710] Linking target lib/librte_port.so.24.0 00:02:24.008 [608/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:24.008 [609/710] Linking target lib/librte_pdump.so.24.0 00:02:24.008 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:24.266 [611/710] Linking target lib/librte_graph.so.24.0 00:02:24.266 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:24.266 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:24.266 [614/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.266 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:24.266 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:24.266 [617/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:24.528 [618/710] Linking target lib/librte_pdcp.so.24.0 00:02:24.528 [619/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:24.528 [620/710] Linking target lib/librte_table.so.24.0 00:02:24.528 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:24.528 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:24.788 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:24.788 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:24.788 [625/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:24.788 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:24.788 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:24.788 [628/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:24.788 [629/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:25.049 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:25.309 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:25.309 [632/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:25.309 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:25.309 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:25.309 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:25.567 [636/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:25.567 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:25.568 [638/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:25.568 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:25.568 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:25.826 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:25.826 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:25.826 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:25.826 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:25.826 [645/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:26.085 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:26.085 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:26.085 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:26.343 [649/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:26.343 [650/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:26.343 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:26.343 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:26.602 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:26.602 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:26.602 [655/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:26.602 [656/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:26.602 [657/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:26.602 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:26.859 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:26.859 [660/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:26.859 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:26.859 [662/710] Linking static target drivers/librte_net_i40e.a 00:02:27.117 [663/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:27.117 [664/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:27.375 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:27.375 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.375 [667/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:27.633 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:27.633 [669/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:27.891 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:28.148 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:28.406 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:28.406 [673/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:28.406 [674/710] Linking static target lib/librte_node.a 00:02:28.664 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.664 [676/710] Linking target lib/librte_node.so.24.0 00:02:29.598 [677/710] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:30.164 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:30.164 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:32.066 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:32.325 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:37.593 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.723 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.723 [684/710] Linking static target lib/librte_vhost.a 00:03:09.723 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.723 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:21.925 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.183 [688/710] Linking static target lib/librte_pipeline.a 00:03:22.749 [689/710] Linking target app/dpdk-dumpcap 00:03:22.749 [690/710] Linking target app/dpdk-test-dma-perf 00:03:22.749 [691/710] Linking target app/dpdk-test-fib 00:03:22.749 [692/710] Linking target app/dpdk-test-gpudev 00:03:22.749 [693/710] Linking target app/dpdk-test-security-perf 00:03:22.749 [694/710] Linking target app/dpdk-test-compress-perf 00:03:22.749 [695/710] Linking target app/dpdk-test-mldev 00:03:22.749 [696/710] Linking target app/dpdk-test-sad 00:03:22.749 [697/710] Linking target app/dpdk-test-cmdline 00:03:22.749 [698/710] Linking target app/dpdk-test-flow-perf 00:03:22.749 [699/710] Linking target app/dpdk-proc-info 00:03:22.749 [700/710] Linking target app/dpdk-pdump 00:03:22.749 [701/710] Linking target app/dpdk-test-regex 00:03:22.749 [702/710] Linking target app/dpdk-graph 00:03:22.749 [703/710] Linking target app/dpdk-test-pipeline 00:03:22.749 [704/710] Linking target app/dpdk-test-eventdev 00:03:22.749 [705/710] Linking target app/dpdk-test-bbdev 00:03:22.749 [706/710] Linking target app/dpdk-test-crypto-perf 00:03:22.749 [707/710] Linking target app/dpdk-test-acl 00:03:23.007 [708/710] Linking target app/dpdk-testpmd 00:03:24.907 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.907 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:24.907 01:21:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:24.907 01:21:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:24.907 01:21:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:24.907 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:24.907 [0/1] Installing files. 
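For context, the build_native_dpdk trace above boils down to an OS check followed by a ninja install of the meson build tree. The lines below are a minimal illustrative sketch of that sequence, not part of the captured log; it assumes bash, an already-configured meson build directory, and uses a hypothetical DPDK_BUILD variable standing in for the dpdk/build-tmp path seen in the trace. The exact branching inside autobuild_common.sh may differ.

    # Illustrative sketch only (assumes bash; DPDK_BUILD is a hypothetical name).
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
    if [[ "$(uname -s)" != "FreeBSD" ]]; then
        # Linux path taken in this run: install DPDK libraries, apps and
        # example sources into the build prefix, using 48 jobs as in the log.
        ninja -C "$DPDK_BUILD" -j48 install
    fi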
00:03:25.167 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:25.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.431 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.434 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.434 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:25.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:25.434 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.434 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:25.435 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.006 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.006 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.007 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.007 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.007 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.007 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.007 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.007 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.007 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 
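[editorial note] The entries above show the core EAL headers (rte_eal.h, rte_lcore.h, rte_malloc.h, rte_memory.h, rte_service.h, ...) being copied into dpdk/build/include; these are the headers a consumer such as SPDK includes to bring the DPDK environment abstraction layer up and down. The sketch below is purely illustrative and not part of this build: it only assumes the headers installed here and a compiler pointed at the install prefix (for example via PKG_CONFIG_PATH and the libdpdk pkg-config file, whose exact location depends on the meson libdir).

/* Minimal EAL consumer - illustrative sketch only, not produced by this job.
 * Uses only headers shown being installed above: rte_eal.h and rte_lcore.h. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL arguments (core list, --no-huge, ...). */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    printf("EAL up, %u lcore(s) available\n", rte_lcore_count());
    rte_eal_cleanup();
    return 0;
}

[end editorial note]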
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.008 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.009 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.010 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:26.011 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:26.011 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:26.011 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:26.011 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:26.011 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:26.011 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:26.011 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:26.011 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:26.011 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:26.011 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:26.011 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:26.011 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:26.011 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:26.011 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:26.011 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:26.011 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:26.011 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:26.011 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:26.011 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:26.011 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:26.011 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:26.011 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:26.011 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:26.011 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:26.011 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:26.011 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:26.011 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:26.011 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:26.011 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:26.011 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:26.011 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:26.011 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:26.011 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:26.011 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:26.011 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:26.011 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:26.011 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:26.011 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:26.011 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:26.011 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:26.011 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:26.011 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:26.011 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:26.011 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:26.011 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:26.011 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:26.011 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:26.011 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:26.011 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:26.011 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:26.011 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:26.011 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:26.011 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:26.011 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:26.012 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:26.012 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:26.012 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:26.012 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:26.012 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:26.012 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:26.012 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:26.012 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:26.012 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:26.012 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:26.012 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:26.012 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:26.012 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:26.012 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:26.012 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:26.012 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:26.012 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:26.012 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:26.012 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:26.012 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:26.012 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:26.012 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:26.012 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:26.012 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:26.012 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:26.012 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:26.012 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:26.012 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:26.012 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:26.012 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:26.012 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:26.012 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:26.012 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:26.012 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:26.012 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:26.012 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:26.012 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:26.012 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:26.012 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:26.012 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:26.012 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:26.012 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:26.012 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:26.012 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:26.012 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:26.012 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:26.012 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:26.012 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:26.012 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:26.012 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:26.012 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:26.012 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:26.012 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:26.012 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:26.012 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:26.012 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:26.012 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:26.012 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:26.012 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:26.012 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:26.012 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:26.012 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:26.012 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:26.012 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:26.012 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:26.012 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:26.012 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:26.012 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:26.012 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:26.012 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:26.012 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:26.012 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:26.012 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:26.012 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:26.012 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:26.012 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:26.012 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:26.012 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:26.012 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:26.012 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:26.271 01:21:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:26.271 01:21:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.271 00:03:26.271 real 1m28.623s 00:03:26.271 user 18m4.122s 00:03:26.271 sys 2m9.666s 00:03:26.271 01:21:05 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:26.271 01:21:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:26.271 ************************************ 00:03:26.271 END TEST build_native_dpdk 00:03:26.271 ************************************ 00:03:26.271 01:21:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:26.271 01:21:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:26.271 01:21:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:26.271 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:26.271 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:26.271 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:26.271 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:26.530 Using 'verbs' RDMA provider 00:03:37.069 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:47.045 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:47.046 Creating mk/config.mk...done. 00:03:47.046 Creating mk/cc.flags.mk...done. 00:03:47.046 Type 'make' to build. 00:03:47.046 01:21:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:47.046 01:21:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:47.046 01:21:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:47.046 01:21:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.046 ************************************ 00:03:47.046 START TEST make 00:03:47.046 ************************************ 00:03:47.046 01:21:25 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:47.046 make[1]: Nothing to be done for 'all'. 
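For reference, the configure step above locates the freshly built DPDK through the libdpdk.pc / libdpdk-libs.pc files that the install phase copied into dpdk/build/lib/pkgconfig earlier in this log ("Using .../dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch, assuming the same workspace layout as this job, of how that lookup can be reproduced by hand with standard pkg-config switches:

    # Sketch only: PKG_CONFIG_PATH points at the pkgconfig directory populated by
    # the DPDK install step shown above (libdpdk.pc, libdpdk-libs.pc).
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig
    pkg-config --modversion libdpdk   # version string of the DPDK just installed
    pkg-config --cflags libdpdk       # include flags, e.g. -I$DPDK_BUILD/include
    pkg-config --libs libdpdk         # link flags for the shared librte_* libraries

The exact flag output depends on the DPDK options chosen by this job; the sketch only illustrates the mechanism configure relies on.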
00:03:47.989 The Meson build system 00:03:47.989 Version: 1.5.0 00:03:47.989 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:47.989 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:47.989 Build type: native build 00:03:47.989 Project name: libvfio-user 00:03:47.989 Project version: 0.0.1 00:03:47.989 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:47.989 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:47.989 Host machine cpu family: x86_64 00:03:47.989 Host machine cpu: x86_64 00:03:47.989 Run-time dependency threads found: YES 00:03:47.989 Library dl found: YES 00:03:47.989 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:47.989 Run-time dependency json-c found: YES 0.17 00:03:47.989 Run-time dependency cmocka found: YES 1.1.7 00:03:47.989 Program pytest-3 found: NO 00:03:47.989 Program flake8 found: NO 00:03:47.989 Program misspell-fixer found: NO 00:03:47.989 Program restructuredtext-lint found: NO 00:03:47.989 Program valgrind found: YES (/usr/bin/valgrind) 00:03:47.989 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:47.989 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:47.989 Compiler for C supports arguments -Wwrite-strings: YES 00:03:47.989 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:47.989 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:47.989 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:47.989 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:47.989 Build targets in project: 8 00:03:47.989 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:47.989 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:47.989 00:03:47.989 libvfio-user 0.0.1 00:03:47.989 00:03:47.989 User defined options 00:03:47.989 buildtype : debug 00:03:47.989 default_library: shared 00:03:47.989 libdir : /usr/local/lib 00:03:47.989 00:03:47.989 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:48.932 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:48.932 [1/37] Compiling C object samples/null.p/null.c.o 00:03:48.932 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:48.932 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:48.932 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:49.197 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:49.198 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:49.198 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:49.198 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:49.198 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:49.198 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:49.198 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:49.198 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:49.198 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:49.198 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:49.198 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:49.198 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:49.198 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:49.198 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:49.198 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:49.198 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:49.198 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:49.198 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:49.198 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:49.198 [24/37] Compiling C object samples/server.p/server.c.o 00:03:49.198 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:49.198 [26/37] Compiling C object samples/client.p/client.c.o 00:03:49.461 [27/37] Linking target samples/client 00:03:49.461 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:49.461 [29/37] Linking target test/unit_tests 00:03:49.461 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:49.461 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:49.724 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:49.724 [33/37] Linking target samples/server 00:03:49.724 [34/37] Linking target samples/null 00:03:49.724 [35/37] Linking target samples/lspci 00:03:49.724 [36/37] Linking target samples/gpio-pci-idio-16 00:03:49.724 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:49.724 INFO: autodetecting backend as ninja 00:03:49.724 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
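The libvfio-user submodule build shown here is an ordinary Meson/Ninja flow. A minimal standalone sketch of the equivalent invocation, assuming the submodule is checked out under ./libvfio-user (the build and staging directory names below are illustrative, not the exact workspace paths used above):

  # Configure an out-of-tree debug build as a shared library, matching the
  # "User defined options" summary in the log.
  meson setup build-debug libvfio-user \
      -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  # Compile with the autodetected Ninja backend.
  ninja -C build-debug
  # Stage the result into a DESTDIR, as the SPDK makefile does next.
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug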
00:03:49.985 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:50.559 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:50.559 ninja: no work to do. 00:04:29.259 CC lib/ut_mock/mock.o 00:04:29.259 CC lib/ut/ut.o 00:04:29.259 CC lib/log/log.o 00:04:29.259 CC lib/log/log_flags.o 00:04:29.259 CC lib/log/log_deprecated.o 00:04:29.259 LIB libspdk_ut.a 00:04:29.259 LIB libspdk_ut_mock.a 00:04:29.259 LIB libspdk_log.a 00:04:29.259 SO libspdk_ut_mock.so.6.0 00:04:29.259 SO libspdk_ut.so.2.0 00:04:29.259 SO libspdk_log.so.7.0 00:04:29.259 SYMLINK libspdk_ut.so 00:04:29.259 SYMLINK libspdk_ut_mock.so 00:04:29.259 SYMLINK libspdk_log.so 00:04:29.259 CC lib/dma/dma.o 00:04:29.259 CXX lib/trace_parser/trace.o 00:04:29.259 CC lib/ioat/ioat.o 00:04:29.259 CC lib/util/base64.o 00:04:29.259 CC lib/util/bit_array.o 00:04:29.259 CC lib/util/cpuset.o 00:04:29.259 CC lib/util/crc16.o 00:04:29.259 CC lib/util/crc32.o 00:04:29.259 CC lib/util/crc32c.o 00:04:29.259 CC lib/util/crc32_ieee.o 00:04:29.259 CC lib/util/crc64.o 00:04:29.259 CC lib/util/dif.o 00:04:29.259 CC lib/util/fd.o 00:04:29.259 CC lib/util/fd_group.o 00:04:29.259 CC lib/util/file.o 00:04:29.259 CC lib/util/hexlify.o 00:04:29.259 CC lib/util/iov.o 00:04:29.259 CC lib/util/math.o 00:04:29.259 CC lib/util/net.o 00:04:29.259 CC lib/util/pipe.o 00:04:29.259 CC lib/util/strerror_tls.o 00:04:29.259 CC lib/util/string.o 00:04:29.259 CC lib/util/uuid.o 00:04:29.259 CC lib/util/xor.o 00:04:29.259 CC lib/util/zipf.o 00:04:29.259 CC lib/util/md5.o 00:04:29.259 CC lib/vfio_user/host/vfio_user_pci.o 00:04:29.259 CC lib/vfio_user/host/vfio_user.o 00:04:29.259 LIB libspdk_dma.a 00:04:29.259 SO libspdk_dma.so.5.0 00:04:29.259 SYMLINK libspdk_dma.so 00:04:29.259 LIB libspdk_ioat.a 00:04:29.259 LIB libspdk_vfio_user.a 00:04:29.259 SO libspdk_ioat.so.7.0 00:04:29.259 SO libspdk_vfio_user.so.5.0 00:04:29.259 SYMLINK libspdk_ioat.so 00:04:29.259 SYMLINK libspdk_vfio_user.so 00:04:29.259 LIB libspdk_util.a 00:04:29.259 SO libspdk_util.so.10.0 00:04:29.259 SYMLINK libspdk_util.so 00:04:29.259 CC lib/json/json_parse.o 00:04:29.259 CC lib/conf/conf.o 00:04:29.259 CC lib/json/json_util.o 00:04:29.259 CC lib/idxd/idxd.o 00:04:29.259 CC lib/vmd/vmd.o 00:04:29.259 CC lib/env_dpdk/env.o 00:04:29.259 CC lib/json/json_write.o 00:04:29.259 CC lib/rdma_utils/rdma_utils.o 00:04:29.259 CC lib/rdma_provider/common.o 00:04:29.259 CC lib/idxd/idxd_user.o 00:04:29.259 CC lib/env_dpdk/memory.o 00:04:29.259 CC lib/vmd/led.o 00:04:29.259 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:29.259 CC lib/idxd/idxd_kernel.o 00:04:29.259 CC lib/env_dpdk/pci.o 00:04:29.259 CC lib/env_dpdk/init.o 00:04:29.259 CC lib/env_dpdk/threads.o 00:04:29.259 CC lib/env_dpdk/pci_ioat.o 00:04:29.259 CC lib/env_dpdk/pci_virtio.o 00:04:29.259 CC lib/env_dpdk/pci_vmd.o 00:04:29.259 CC lib/env_dpdk/pci_idxd.o 00:04:29.259 CC lib/env_dpdk/pci_event.o 00:04:29.259 CC lib/env_dpdk/sigbus_handler.o 00:04:29.259 CC lib/env_dpdk/pci_dpdk.o 00:04:29.259 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:29.259 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:29.259 LIB libspdk_trace_parser.a 00:04:29.259 SO libspdk_trace_parser.so.6.0 00:04:29.259 SYMLINK libspdk_trace_parser.so 00:04:29.259 LIB libspdk_conf.a 00:04:29.259 LIB libspdk_rdma_provider.a 00:04:29.259 SO libspdk_conf.so.6.0 00:04:29.259 SO libspdk_rdma_provider.so.6.0 00:04:29.259 
LIB libspdk_rdma_utils.a 00:04:29.259 LIB libspdk_json.a 00:04:29.259 SO libspdk_rdma_utils.so.1.0 00:04:29.259 SYMLINK libspdk_conf.so 00:04:29.259 SYMLINK libspdk_rdma_provider.so 00:04:29.259 SO libspdk_json.so.6.0 00:04:29.259 SYMLINK libspdk_rdma_utils.so 00:04:29.259 SYMLINK libspdk_json.so 00:04:29.259 CC lib/jsonrpc/jsonrpc_server.o 00:04:29.259 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:29.259 CC lib/jsonrpc/jsonrpc_client.o 00:04:29.259 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:29.259 LIB libspdk_vmd.a 00:04:29.259 LIB libspdk_idxd.a 00:04:29.259 SO libspdk_vmd.so.6.0 00:04:29.259 SO libspdk_idxd.so.12.1 00:04:29.259 SYMLINK libspdk_vmd.so 00:04:29.259 SYMLINK libspdk_idxd.so 00:04:29.259 LIB libspdk_jsonrpc.a 00:04:29.259 SO libspdk_jsonrpc.so.6.0 00:04:29.259 SYMLINK libspdk_jsonrpc.so 00:04:29.259 CC lib/rpc/rpc.o 00:04:29.517 LIB libspdk_rpc.a 00:04:29.517 SO libspdk_rpc.so.6.0 00:04:29.517 SYMLINK libspdk_rpc.so 00:04:29.775 CC lib/notify/notify.o 00:04:29.775 CC lib/notify/notify_rpc.o 00:04:29.775 CC lib/keyring/keyring.o 00:04:29.775 CC lib/keyring/keyring_rpc.o 00:04:29.775 CC lib/trace/trace.o 00:04:29.775 CC lib/trace/trace_flags.o 00:04:29.775 CC lib/trace/trace_rpc.o 00:04:29.775 LIB libspdk_notify.a 00:04:29.775 SO libspdk_notify.so.6.0 00:04:30.033 SYMLINK libspdk_notify.so 00:04:30.033 LIB libspdk_keyring.a 00:04:30.033 SO libspdk_keyring.so.2.0 00:04:30.033 LIB libspdk_trace.a 00:04:30.033 SO libspdk_trace.so.11.0 00:04:30.033 SYMLINK libspdk_keyring.so 00:04:30.033 SYMLINK libspdk_trace.so 00:04:30.289 CC lib/sock/sock.o 00:04:30.289 CC lib/sock/sock_rpc.o 00:04:30.289 CC lib/thread/thread.o 00:04:30.289 CC lib/thread/iobuf.o 00:04:30.289 LIB libspdk_env_dpdk.a 00:04:30.289 SO libspdk_env_dpdk.so.15.0 00:04:30.547 SYMLINK libspdk_env_dpdk.so 00:04:30.547 LIB libspdk_sock.a 00:04:30.547 SO libspdk_sock.so.10.0 00:04:30.859 SYMLINK libspdk_sock.so 00:04:30.859 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:30.859 CC lib/nvme/nvme_ctrlr.o 00:04:30.859 CC lib/nvme/nvme_fabric.o 00:04:30.859 CC lib/nvme/nvme_ns_cmd.o 00:04:30.859 CC lib/nvme/nvme_ns.o 00:04:30.859 CC lib/nvme/nvme_pcie_common.o 00:04:30.859 CC lib/nvme/nvme_pcie.o 00:04:30.859 CC lib/nvme/nvme_qpair.o 00:04:30.859 CC lib/nvme/nvme.o 00:04:30.859 CC lib/nvme/nvme_quirks.o 00:04:30.859 CC lib/nvme/nvme_transport.o 00:04:30.859 CC lib/nvme/nvme_discovery.o 00:04:30.859 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:30.859 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:30.859 CC lib/nvme/nvme_tcp.o 00:04:30.859 CC lib/nvme/nvme_opal.o 00:04:30.859 CC lib/nvme/nvme_io_msg.o 00:04:30.859 CC lib/nvme/nvme_poll_group.o 00:04:30.859 CC lib/nvme/nvme_zns.o 00:04:30.859 CC lib/nvme/nvme_stubs.o 00:04:30.859 CC lib/nvme/nvme_auth.o 00:04:30.859 CC lib/nvme/nvme_cuse.o 00:04:30.859 CC lib/nvme/nvme_vfio_user.o 00:04:30.859 CC lib/nvme/nvme_rdma.o 00:04:31.829 LIB libspdk_thread.a 00:04:31.829 SO libspdk_thread.so.10.1 00:04:31.829 SYMLINK libspdk_thread.so 00:04:32.087 CC lib/fsdev/fsdev.o 00:04:32.087 CC lib/accel/accel.o 00:04:32.087 CC lib/init/json_config.o 00:04:32.087 CC lib/virtio/virtio.o 00:04:32.087 CC lib/fsdev/fsdev_io.o 00:04:32.087 CC lib/init/subsystem.o 00:04:32.087 CC lib/virtio/virtio_vhost_user.o 00:04:32.087 CC lib/accel/accel_rpc.o 00:04:32.087 CC lib/fsdev/fsdev_rpc.o 00:04:32.087 CC lib/virtio/virtio_vfio_user.o 00:04:32.087 CC lib/init/subsystem_rpc.o 00:04:32.087 CC lib/accel/accel_sw.o 00:04:32.087 CC lib/init/rpc.o 00:04:32.087 CC lib/virtio/virtio_pci.o 00:04:32.087 CC lib/vfu_tgt/tgt_endpoint.o 00:04:32.087 CC 
lib/blob/blobstore.o 00:04:32.087 CC lib/vfu_tgt/tgt_rpc.o 00:04:32.087 CC lib/blob/request.o 00:04:32.087 CC lib/blob/zeroes.o 00:04:32.087 CC lib/blob/blob_bs_dev.o 00:04:32.345 LIB libspdk_init.a 00:04:32.345 SO libspdk_init.so.6.0 00:04:32.345 SYMLINK libspdk_init.so 00:04:32.345 LIB libspdk_vfu_tgt.a 00:04:32.345 SO libspdk_vfu_tgt.so.3.0 00:04:32.345 LIB libspdk_virtio.a 00:04:32.602 SO libspdk_virtio.so.7.0 00:04:32.602 SYMLINK libspdk_vfu_tgt.so 00:04:32.602 SYMLINK libspdk_virtio.so 00:04:32.602 CC lib/event/app.o 00:04:32.602 CC lib/event/reactor.o 00:04:32.602 CC lib/event/log_rpc.o 00:04:32.602 CC lib/event/app_rpc.o 00:04:32.602 CC lib/event/scheduler_static.o 00:04:32.861 LIB libspdk_fsdev.a 00:04:32.861 SO libspdk_fsdev.so.1.0 00:04:32.861 SYMLINK libspdk_fsdev.so 00:04:33.118 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:33.118 LIB libspdk_event.a 00:04:33.118 SO libspdk_event.so.14.0 00:04:33.118 SYMLINK libspdk_event.so 00:04:33.376 LIB libspdk_accel.a 00:04:33.376 SO libspdk_accel.so.16.0 00:04:33.376 SYMLINK libspdk_accel.so 00:04:33.376 LIB libspdk_nvme.a 00:04:33.376 SO libspdk_nvme.so.14.0 00:04:33.633 CC lib/bdev/bdev.o 00:04:33.633 CC lib/bdev/bdev_rpc.o 00:04:33.633 CC lib/bdev/bdev_zone.o 00:04:33.633 CC lib/bdev/part.o 00:04:33.633 CC lib/bdev/scsi_nvme.o 00:04:33.633 LIB libspdk_fuse_dispatcher.a 00:04:33.633 SO libspdk_fuse_dispatcher.so.1.0 00:04:33.633 SYMLINK libspdk_nvme.so 00:04:33.633 SYMLINK libspdk_fuse_dispatcher.so 00:04:35.534 LIB libspdk_blob.a 00:04:35.534 SO libspdk_blob.so.11.0 00:04:35.534 SYMLINK libspdk_blob.so 00:04:35.534 CC lib/blobfs/blobfs.o 00:04:35.534 CC lib/blobfs/tree.o 00:04:35.534 CC lib/lvol/lvol.o 00:04:36.100 LIB libspdk_bdev.a 00:04:36.100 SO libspdk_bdev.so.16.0 00:04:36.365 SYMLINK libspdk_bdev.so 00:04:36.365 LIB libspdk_blobfs.a 00:04:36.365 SO libspdk_blobfs.so.10.0 00:04:36.365 SYMLINK libspdk_blobfs.so 00:04:36.365 LIB libspdk_lvol.a 00:04:36.365 SO libspdk_lvol.so.10.0 00:04:36.365 CC lib/scsi/dev.o 00:04:36.365 CC lib/nbd/nbd.o 00:04:36.365 CC lib/ublk/ublk.o 00:04:36.365 CC lib/nvmf/ctrlr.o 00:04:36.365 CC lib/ublk/ublk_rpc.o 00:04:36.365 CC lib/scsi/lun.o 00:04:36.365 CC lib/nvmf/ctrlr_discovery.o 00:04:36.365 CC lib/nbd/nbd_rpc.o 00:04:36.365 CC lib/scsi/port.o 00:04:36.365 CC lib/ftl/ftl_core.o 00:04:36.365 CC lib/nvmf/ctrlr_bdev.o 00:04:36.365 CC lib/scsi/scsi.o 00:04:36.365 CC lib/ftl/ftl_init.o 00:04:36.365 CC lib/nvmf/subsystem.o 00:04:36.365 CC lib/ftl/ftl_layout.o 00:04:36.365 CC lib/scsi/scsi_bdev.o 00:04:36.365 CC lib/scsi/scsi_pr.o 00:04:36.365 CC lib/nvmf/nvmf.o 00:04:36.365 CC lib/ftl/ftl_debug.o 00:04:36.365 CC lib/scsi/scsi_rpc.o 00:04:36.365 CC lib/nvmf/nvmf_rpc.o 00:04:36.365 CC lib/ftl/ftl_io.o 00:04:36.365 CC lib/nvmf/transport.o 00:04:36.365 CC lib/scsi/task.o 00:04:36.365 CC lib/ftl/ftl_l2p.o 00:04:36.365 CC lib/ftl/ftl_sb.o 00:04:36.365 CC lib/nvmf/tcp.o 00:04:36.365 CC lib/nvmf/stubs.o 00:04:36.365 CC lib/ftl/ftl_l2p_flat.o 00:04:36.365 CC lib/nvmf/mdns_server.o 00:04:36.365 CC lib/ftl/ftl_nv_cache.o 00:04:36.365 CC lib/nvmf/vfio_user.o 00:04:36.365 CC lib/ftl/ftl_band.o 00:04:36.365 CC lib/nvmf/rdma.o 00:04:36.365 CC lib/ftl/ftl_band_ops.o 00:04:36.365 CC lib/ftl/ftl_writer.o 00:04:36.365 CC lib/nvmf/auth.o 00:04:36.365 CC lib/ftl/ftl_rq.o 00:04:36.365 CC lib/ftl/ftl_reloc.o 00:04:36.365 CC lib/ftl/ftl_l2p_cache.o 00:04:36.365 CC lib/ftl/ftl_p2l.o 00:04:36.365 CC lib/ftl/ftl_p2l_log.o 00:04:36.365 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.365 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:36.365 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.365 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.365 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:36.365 SYMLINK libspdk_lvol.so 00:04:36.365 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.942 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:36.942 CC lib/ftl/utils/ftl_conf.o 00:04:36.942 CC lib/ftl/utils/ftl_md.o 00:04:36.942 CC lib/ftl/utils/ftl_mempool.o 00:04:36.942 CC lib/ftl/utils/ftl_bitmap.o 00:04:36.942 CC lib/ftl/utils/ftl_property.o 00:04:36.942 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:36.942 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:36.942 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:37.202 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:37.202 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.202 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.202 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:37.202 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:37.202 CC lib/ftl/base/ftl_base_dev.o 00:04:37.202 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.202 CC lib/ftl/ftl_trace.o 00:04:37.202 LIB libspdk_nbd.a 00:04:37.202 SO libspdk_nbd.so.7.0 00:04:37.460 LIB libspdk_scsi.a 00:04:37.460 SYMLINK libspdk_nbd.so 00:04:37.460 SO libspdk_scsi.so.9.0 00:04:37.460 SYMLINK libspdk_scsi.so 00:04:37.460 LIB libspdk_ublk.a 00:04:37.460 SO libspdk_ublk.so.3.0 00:04:37.719 CC lib/iscsi/conn.o 00:04:37.719 CC lib/vhost/vhost.o 00:04:37.719 CC lib/iscsi/init_grp.o 00:04:37.719 CC lib/vhost/vhost_rpc.o 00:04:37.719 CC lib/iscsi/iscsi.o 00:04:37.719 CC lib/vhost/vhost_scsi.o 00:04:37.719 CC lib/iscsi/param.o 00:04:37.719 CC lib/vhost/vhost_blk.o 00:04:37.719 CC lib/iscsi/portal_grp.o 00:04:37.719 CC lib/vhost/rte_vhost_user.o 00:04:37.719 CC lib/iscsi/tgt_node.o 00:04:37.719 CC lib/iscsi/iscsi_subsystem.o 00:04:37.719 CC lib/iscsi/iscsi_rpc.o 00:04:37.719 CC lib/iscsi/task.o 00:04:37.719 SYMLINK libspdk_ublk.so 00:04:37.977 LIB libspdk_ftl.a 00:04:37.977 SO libspdk_ftl.so.9.0 00:04:38.234 SYMLINK libspdk_ftl.so 00:04:38.800 LIB libspdk_vhost.a 00:04:38.800 SO libspdk_vhost.so.8.0 00:04:39.059 SYMLINK libspdk_vhost.so 00:04:39.059 LIB libspdk_nvmf.a 00:04:39.059 LIB libspdk_iscsi.a 00:04:39.059 SO libspdk_iscsi.so.8.0 00:04:39.059 SO libspdk_nvmf.so.19.0 00:04:39.324 SYMLINK libspdk_iscsi.so 00:04:39.324 SYMLINK libspdk_nvmf.so 00:04:39.582 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.582 CC module/vfu_device/vfu_virtio.o 00:04:39.582 CC module/vfu_device/vfu_virtio_blk.o 00:04:39.582 CC module/vfu_device/vfu_virtio_scsi.o 00:04:39.582 CC module/vfu_device/vfu_virtio_rpc.o 00:04:39.582 CC module/vfu_device/vfu_virtio_fs.o 00:04:39.582 CC module/accel/dsa/accel_dsa.o 00:04:39.582 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.582 CC module/accel/iaa/accel_iaa.o 00:04:39.582 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.582 CC module/accel/error/accel_error.o 00:04:39.582 CC module/accel/error/accel_error_rpc.o 00:04:39.582 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:39.582 CC module/accel/ioat/accel_ioat.o 00:04:39.582 CC module/accel/ioat/accel_ioat_rpc.o 00:04:39.582 CC 
module/sock/posix/posix.o 00:04:39.582 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:39.582 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.582 CC module/keyring/file/keyring.o 00:04:39.582 CC module/keyring/file/keyring_rpc.o 00:04:39.582 CC module/keyring/linux/keyring.o 00:04:39.582 CC module/keyring/linux/keyring_rpc.o 00:04:39.582 CC module/fsdev/aio/fsdev_aio.o 00:04:39.582 CC module/blob/bdev/blob_bdev.o 00:04:39.582 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:39.582 CC module/fsdev/aio/linux_aio_mgr.o 00:04:39.839 LIB libspdk_env_dpdk_rpc.a 00:04:39.839 SO libspdk_env_dpdk_rpc.so.6.0 00:04:39.839 SYMLINK libspdk_env_dpdk_rpc.so 00:04:39.839 LIB libspdk_keyring_file.a 00:04:39.839 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.839 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.839 SO libspdk_keyring_file.so.2.0 00:04:39.839 LIB libspdk_accel_error.a 00:04:39.839 LIB libspdk_accel_ioat.a 00:04:39.839 LIB libspdk_scheduler_dynamic.a 00:04:39.839 SO libspdk_accel_error.so.2.0 00:04:39.839 LIB libspdk_keyring_linux.a 00:04:39.839 LIB libspdk_accel_iaa.a 00:04:39.839 SO libspdk_accel_ioat.so.6.0 00:04:40.096 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:40.096 SO libspdk_scheduler_dynamic.so.4.0 00:04:40.096 SYMLINK libspdk_keyring_file.so 00:04:40.096 SO libspdk_keyring_linux.so.1.0 00:04:40.096 SO libspdk_accel_iaa.so.3.0 00:04:40.096 LIB libspdk_scheduler_gscheduler.a 00:04:40.096 SYMLINK libspdk_accel_error.so 00:04:40.096 SO libspdk_scheduler_gscheduler.so.4.0 00:04:40.096 SYMLINK libspdk_accel_ioat.so 00:04:40.096 SYMLINK libspdk_scheduler_dynamic.so 00:04:40.096 SYMLINK libspdk_keyring_linux.so 00:04:40.097 LIB libspdk_blob_bdev.a 00:04:40.097 SYMLINK libspdk_accel_iaa.so 00:04:40.097 SO libspdk_blob_bdev.so.11.0 00:04:40.097 SYMLINK libspdk_scheduler_gscheduler.so 00:04:40.097 SYMLINK libspdk_blob_bdev.so 00:04:40.097 LIB libspdk_accel_dsa.a 00:04:40.097 SO libspdk_accel_dsa.so.5.0 00:04:40.097 SYMLINK libspdk_accel_dsa.so 00:04:40.355 LIB libspdk_vfu_device.a 00:04:40.355 CC module/bdev/lvol/vbdev_lvol.o 00:04:40.355 CC module/bdev/gpt/gpt.o 00:04:40.355 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:40.355 CC module/bdev/gpt/vbdev_gpt.o 00:04:40.355 CC module/bdev/split/vbdev_split.o 00:04:40.355 CC module/bdev/malloc/bdev_malloc.o 00:04:40.355 CC module/bdev/error/vbdev_error.o 00:04:40.355 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.355 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.355 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.355 CC module/bdev/error/vbdev_error_rpc.o 00:04:40.355 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.355 CC module/bdev/passthru/vbdev_passthru.o 00:04:40.355 CC module/bdev/null/bdev_null.o 00:04:40.355 CC module/bdev/raid/bdev_raid.o 00:04:40.355 CC module/bdev/delay/vbdev_delay.o 00:04:40.355 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:40.355 CC module/bdev/null/bdev_null_rpc.o 00:04:40.355 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.355 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:40.355 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.355 CC module/bdev/raid/raid0.o 00:04:40.355 CC module/bdev/raid/raid1.o 00:04:40.355 CC module/bdev/nvme/bdev_nvme.o 00:04:40.355 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.355 CC module/bdev/raid/concat.o 00:04:40.355 CC module/bdev/nvme/nvme_rpc.o 00:04:40.355 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.355 CC module/bdev/nvme/vbdev_opal.o 00:04:40.355 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.355 CC module/bdev/ftl/bdev_ftl.o 00:04:40.355 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:04:40.355 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.355 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.355 CC module/bdev/aio/bdev_aio.o 00:04:40.355 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.355 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.355 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.355 SO libspdk_vfu_device.so.3.0 00:04:40.355 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.355 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.355 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.355 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.613 SYMLINK libspdk_vfu_device.so 00:04:40.613 LIB libspdk_sock_posix.a 00:04:40.613 LIB libspdk_fsdev_aio.a 00:04:40.613 SO libspdk_sock_posix.so.6.0 00:04:40.613 SO libspdk_fsdev_aio.so.1.0 00:04:40.613 SYMLINK libspdk_fsdev_aio.so 00:04:40.871 SYMLINK libspdk_sock_posix.so 00:04:40.871 LIB libspdk_blobfs_bdev.a 00:04:40.871 SO libspdk_blobfs_bdev.so.6.0 00:04:40.871 LIB libspdk_bdev_split.a 00:04:40.871 SYMLINK libspdk_blobfs_bdev.so 00:04:40.871 LIB libspdk_bdev_gpt.a 00:04:40.871 SO libspdk_bdev_split.so.6.0 00:04:40.871 SO libspdk_bdev_gpt.so.6.0 00:04:40.871 LIB libspdk_bdev_error.a 00:04:40.871 LIB libspdk_bdev_passthru.a 00:04:40.871 LIB libspdk_bdev_null.a 00:04:40.871 SYMLINK libspdk_bdev_split.so 00:04:40.871 SO libspdk_bdev_error.so.6.0 00:04:40.871 SO libspdk_bdev_passthru.so.6.0 00:04:40.871 SO libspdk_bdev_null.so.6.0 00:04:40.871 LIB libspdk_bdev_ftl.a 00:04:40.871 SYMLINK libspdk_bdev_gpt.so 00:04:40.871 SO libspdk_bdev_ftl.so.6.0 00:04:40.871 SYMLINK libspdk_bdev_error.so 00:04:40.871 SYMLINK libspdk_bdev_passthru.so 00:04:40.871 SYMLINK libspdk_bdev_null.so 00:04:40.871 LIB libspdk_bdev_aio.a 00:04:40.871 LIB libspdk_bdev_iscsi.a 00:04:40.871 LIB libspdk_bdev_zone_block.a 00:04:41.129 SO libspdk_bdev_aio.so.6.0 00:04:41.129 LIB libspdk_bdev_malloc.a 00:04:41.129 SO libspdk_bdev_iscsi.so.6.0 00:04:41.129 SYMLINK libspdk_bdev_ftl.so 00:04:41.129 LIB libspdk_bdev_delay.a 00:04:41.129 SO libspdk_bdev_zone_block.so.6.0 00:04:41.129 SO libspdk_bdev_malloc.so.6.0 00:04:41.129 SO libspdk_bdev_delay.so.6.0 00:04:41.129 SYMLINK libspdk_bdev_iscsi.so 00:04:41.129 SYMLINK libspdk_bdev_aio.so 00:04:41.129 SYMLINK libspdk_bdev_zone_block.so 00:04:41.129 SYMLINK libspdk_bdev_malloc.so 00:04:41.129 SYMLINK libspdk_bdev_delay.so 00:04:41.129 LIB libspdk_bdev_lvol.a 00:04:41.129 LIB libspdk_bdev_virtio.a 00:04:41.129 SO libspdk_bdev_lvol.so.6.0 00:04:41.129 SO libspdk_bdev_virtio.so.6.0 00:04:41.129 SYMLINK libspdk_bdev_lvol.so 00:04:41.129 SYMLINK libspdk_bdev_virtio.so 00:04:41.695 LIB libspdk_bdev_raid.a 00:04:41.695 SO libspdk_bdev_raid.so.6.0 00:04:41.695 SYMLINK libspdk_bdev_raid.so 00:04:43.069 LIB libspdk_bdev_nvme.a 00:04:43.069 SO libspdk_bdev_nvme.so.7.0 00:04:43.069 SYMLINK libspdk_bdev_nvme.so 00:04:43.327 CC module/event/subsystems/sock/sock.o 00:04:43.327 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:43.327 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.327 CC module/event/subsystems/keyring/keyring.o 00:04:43.327 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.327 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.327 CC module/event/subsystems/fsdev/fsdev.o 00:04:43.327 CC module/event/subsystems/vmd/vmd.o 00:04:43.327 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.327 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:43.586 LIB libspdk_event_keyring.a 00:04:43.586 LIB libspdk_event_vhost_blk.a 00:04:43.586 LIB libspdk_event_fsdev.a 00:04:43.586 LIB libspdk_event_scheduler.a 
00:04:43.586 LIB libspdk_event_vfu_tgt.a 00:04:43.586 LIB libspdk_event_vmd.a 00:04:43.586 LIB libspdk_event_sock.a 00:04:43.586 SO libspdk_event_keyring.so.1.0 00:04:43.586 LIB libspdk_event_iobuf.a 00:04:43.586 SO libspdk_event_fsdev.so.1.0 00:04:43.586 SO libspdk_event_vhost_blk.so.3.0 00:04:43.586 SO libspdk_event_scheduler.so.4.0 00:04:43.586 SO libspdk_event_vfu_tgt.so.3.0 00:04:43.586 SO libspdk_event_sock.so.5.0 00:04:43.586 SO libspdk_event_vmd.so.6.0 00:04:43.586 SO libspdk_event_iobuf.so.3.0 00:04:43.586 SYMLINK libspdk_event_keyring.so 00:04:43.586 SYMLINK libspdk_event_vhost_blk.so 00:04:43.586 SYMLINK libspdk_event_fsdev.so 00:04:43.586 SYMLINK libspdk_event_scheduler.so 00:04:43.586 SYMLINK libspdk_event_vfu_tgt.so 00:04:43.586 SYMLINK libspdk_event_sock.so 00:04:43.586 SYMLINK libspdk_event_vmd.so 00:04:43.586 SYMLINK libspdk_event_iobuf.so 00:04:43.844 CC module/event/subsystems/accel/accel.o 00:04:43.844 LIB libspdk_event_accel.a 00:04:43.844 SO libspdk_event_accel.so.6.0 00:04:44.101 SYMLINK libspdk_event_accel.so 00:04:44.101 CC module/event/subsystems/bdev/bdev.o 00:04:44.360 LIB libspdk_event_bdev.a 00:04:44.360 SO libspdk_event_bdev.so.6.0 00:04:44.360 SYMLINK libspdk_event_bdev.so 00:04:44.618 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.618 CC module/event/subsystems/scsi/scsi.o 00:04:44.618 CC module/event/subsystems/nbd/nbd.o 00:04:44.618 CC module/event/subsystems/ublk/ublk.o 00:04:44.618 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.618 LIB libspdk_event_nbd.a 00:04:44.618 LIB libspdk_event_ublk.a 00:04:44.618 LIB libspdk_event_scsi.a 00:04:44.876 SO libspdk_event_nbd.so.6.0 00:04:44.876 SO libspdk_event_ublk.so.3.0 00:04:44.876 SO libspdk_event_scsi.so.6.0 00:04:44.876 SYMLINK libspdk_event_nbd.so 00:04:44.876 SYMLINK libspdk_event_ublk.so 00:04:44.876 SYMLINK libspdk_event_scsi.so 00:04:44.876 LIB libspdk_event_nvmf.a 00:04:44.876 SO libspdk_event_nvmf.so.6.0 00:04:44.876 SYMLINK libspdk_event_nvmf.so 00:04:44.876 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.876 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.134 LIB libspdk_event_vhost_scsi.a 00:04:45.134 LIB libspdk_event_iscsi.a 00:04:45.134 SO libspdk_event_vhost_scsi.so.3.0 00:04:45.134 SO libspdk_event_iscsi.so.6.0 00:04:45.134 SYMLINK libspdk_event_vhost_scsi.so 00:04:45.134 SYMLINK libspdk_event_iscsi.so 00:04:45.393 SO libspdk.so.6.0 00:04:45.393 SYMLINK libspdk.so 00:04:45.393 CXX app/trace/trace.o 00:04:45.393 CC app/trace_record/trace_record.o 00:04:45.393 CC app/spdk_nvme_identify/identify.o 00:04:45.393 CC app/spdk_top/spdk_top.o 00:04:45.393 CC test/rpc_client/rpc_client_test.o 00:04:45.393 CC app/spdk_nvme_perf/perf.o 00:04:45.393 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.393 TEST_HEADER include/spdk/accel.h 00:04:45.393 TEST_HEADER include/spdk/accel_module.h 00:04:45.393 TEST_HEADER include/spdk/assert.h 00:04:45.393 TEST_HEADER include/spdk/barrier.h 00:04:45.393 TEST_HEADER include/spdk/base64.h 00:04:45.393 CC app/spdk_lspci/spdk_lspci.o 00:04:45.393 TEST_HEADER include/spdk/bdev.h 00:04:45.393 TEST_HEADER include/spdk/bdev_module.h 00:04:45.393 TEST_HEADER include/spdk/bdev_zone.h 00:04:45.393 TEST_HEADER include/spdk/bit_array.h 00:04:45.393 TEST_HEADER include/spdk/bit_pool.h 00:04:45.393 TEST_HEADER include/spdk/blob_bdev.h 00:04:45.393 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:45.393 TEST_HEADER include/spdk/blobfs.h 00:04:45.393 TEST_HEADER include/spdk/blob.h 00:04:45.393 TEST_HEADER include/spdk/conf.h 00:04:45.393 TEST_HEADER 
include/spdk/config.h 00:04:45.393 TEST_HEADER include/spdk/cpuset.h 00:04:45.655 TEST_HEADER include/spdk/crc16.h 00:04:45.655 TEST_HEADER include/spdk/crc32.h 00:04:45.655 TEST_HEADER include/spdk/crc64.h 00:04:45.655 TEST_HEADER include/spdk/dif.h 00:04:45.655 TEST_HEADER include/spdk/dma.h 00:04:45.655 TEST_HEADER include/spdk/endian.h 00:04:45.655 TEST_HEADER include/spdk/env_dpdk.h 00:04:45.655 TEST_HEADER include/spdk/env.h 00:04:45.655 TEST_HEADER include/spdk/event.h 00:04:45.655 TEST_HEADER include/spdk/fd_group.h 00:04:45.655 TEST_HEADER include/spdk/fd.h 00:04:45.655 TEST_HEADER include/spdk/file.h 00:04:45.655 TEST_HEADER include/spdk/fsdev.h 00:04:45.655 TEST_HEADER include/spdk/fsdev_module.h 00:04:45.655 TEST_HEADER include/spdk/ftl.h 00:04:45.655 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:45.655 TEST_HEADER include/spdk/gpt_spec.h 00:04:45.655 TEST_HEADER include/spdk/hexlify.h 00:04:45.655 TEST_HEADER include/spdk/histogram_data.h 00:04:45.655 TEST_HEADER include/spdk/idxd.h 00:04:45.655 TEST_HEADER include/spdk/idxd_spec.h 00:04:45.655 TEST_HEADER include/spdk/ioat.h 00:04:45.655 TEST_HEADER include/spdk/init.h 00:04:45.655 TEST_HEADER include/spdk/ioat_spec.h 00:04:45.655 TEST_HEADER include/spdk/iscsi_spec.h 00:04:45.655 TEST_HEADER include/spdk/json.h 00:04:45.655 TEST_HEADER include/spdk/jsonrpc.h 00:04:45.655 TEST_HEADER include/spdk/keyring.h 00:04:45.655 TEST_HEADER include/spdk/keyring_module.h 00:04:45.655 TEST_HEADER include/spdk/likely.h 00:04:45.655 TEST_HEADER include/spdk/log.h 00:04:45.655 TEST_HEADER include/spdk/lvol.h 00:04:45.655 TEST_HEADER include/spdk/md5.h 00:04:45.655 TEST_HEADER include/spdk/memory.h 00:04:45.655 TEST_HEADER include/spdk/mmio.h 00:04:45.655 TEST_HEADER include/spdk/nbd.h 00:04:45.655 TEST_HEADER include/spdk/notify.h 00:04:45.655 TEST_HEADER include/spdk/net.h 00:04:45.655 TEST_HEADER include/spdk/nvme.h 00:04:45.655 TEST_HEADER include/spdk/nvme_intel.h 00:04:45.655 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:45.655 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:45.655 TEST_HEADER include/spdk/nvme_spec.h 00:04:45.655 TEST_HEADER include/spdk/nvme_zns.h 00:04:45.655 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:45.655 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:45.655 TEST_HEADER include/spdk/nvmf.h 00:04:45.655 TEST_HEADER include/spdk/nvmf_spec.h 00:04:45.655 TEST_HEADER include/spdk/nvmf_transport.h 00:04:45.655 TEST_HEADER include/spdk/opal.h 00:04:45.655 TEST_HEADER include/spdk/opal_spec.h 00:04:45.655 TEST_HEADER include/spdk/pipe.h 00:04:45.655 TEST_HEADER include/spdk/pci_ids.h 00:04:45.655 TEST_HEADER include/spdk/reduce.h 00:04:45.655 TEST_HEADER include/spdk/queue.h 00:04:45.655 TEST_HEADER include/spdk/rpc.h 00:04:45.655 TEST_HEADER include/spdk/scheduler.h 00:04:45.655 TEST_HEADER include/spdk/scsi.h 00:04:45.655 TEST_HEADER include/spdk/scsi_spec.h 00:04:45.655 TEST_HEADER include/spdk/sock.h 00:04:45.655 TEST_HEADER include/spdk/string.h 00:04:45.655 TEST_HEADER include/spdk/stdinc.h 00:04:45.655 TEST_HEADER include/spdk/thread.h 00:04:45.655 TEST_HEADER include/spdk/trace.h 00:04:45.655 TEST_HEADER include/spdk/trace_parser.h 00:04:45.655 TEST_HEADER include/spdk/tree.h 00:04:45.655 TEST_HEADER include/spdk/ublk.h 00:04:45.655 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:45.655 TEST_HEADER include/spdk/util.h 00:04:45.655 TEST_HEADER include/spdk/uuid.h 00:04:45.655 TEST_HEADER include/spdk/version.h 00:04:45.655 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:45.655 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:45.655 TEST_HEADER include/spdk/vhost.h 00:04:45.655 CC app/spdk_dd/spdk_dd.o 00:04:45.655 TEST_HEADER include/spdk/vmd.h 00:04:45.655 TEST_HEADER include/spdk/xor.h 00:04:45.655 TEST_HEADER include/spdk/zipf.h 00:04:45.655 CXX test/cpp_headers/accel.o 00:04:45.655 CXX test/cpp_headers/accel_module.o 00:04:45.655 CXX test/cpp_headers/assert.o 00:04:45.655 CXX test/cpp_headers/barrier.o 00:04:45.655 CXX test/cpp_headers/base64.o 00:04:45.655 CXX test/cpp_headers/bdev.o 00:04:45.655 CXX test/cpp_headers/bdev_module.o 00:04:45.655 CXX test/cpp_headers/bdev_zone.o 00:04:45.655 CXX test/cpp_headers/bit_array.o 00:04:45.655 CXX test/cpp_headers/bit_pool.o 00:04:45.655 CXX test/cpp_headers/blob_bdev.o 00:04:45.655 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.655 CXX test/cpp_headers/blobfs.o 00:04:45.655 CXX test/cpp_headers/blob.o 00:04:45.655 CXX test/cpp_headers/conf.o 00:04:45.655 CXX test/cpp_headers/config.o 00:04:45.655 CC app/nvmf_tgt/nvmf_main.o 00:04:45.655 CXX test/cpp_headers/cpuset.o 00:04:45.655 CXX test/cpp_headers/crc16.o 00:04:45.655 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.655 CXX test/cpp_headers/crc32.o 00:04:45.655 CC examples/ioat/perf/perf.o 00:04:45.655 CC app/spdk_tgt/spdk_tgt.o 00:04:45.655 CC test/app/stub/stub.o 00:04:45.655 CC examples/ioat/verify/verify.o 00:04:45.655 CC test/env/vtophys/vtophys.o 00:04:45.656 CC test/app/histogram_perf/histogram_perf.o 00:04:45.656 CC app/fio/nvme/fio_plugin.o 00:04:45.656 CC test/thread/poller_perf/poller_perf.o 00:04:45.656 CC test/env/memory/memory_ut.o 00:04:45.656 CC test/app/jsoncat/jsoncat.o 00:04:45.656 CC examples/util/zipf/zipf.o 00:04:45.656 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:45.656 CC test/env/pci/pci_ut.o 00:04:45.656 CC test/dma/test_dma/test_dma.o 00:04:45.656 CC app/fio/bdev/fio_plugin.o 00:04:45.656 CC test/app/bdev_svc/bdev_svc.o 00:04:45.919 CC test/env/mem_callbacks/mem_callbacks.o 00:04:45.919 LINK spdk_lspci 00:04:45.919 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.919 LINK rpc_client_test 00:04:45.919 LINK spdk_nvme_discover 00:04:45.919 LINK vtophys 00:04:45.919 LINK jsoncat 00:04:45.919 LINK interrupt_tgt 00:04:45.919 LINK nvmf_tgt 00:04:45.919 LINK histogram_perf 00:04:45.919 CXX test/cpp_headers/crc64.o 00:04:45.919 LINK poller_perf 00:04:45.919 CXX test/cpp_headers/dif.o 00:04:45.919 LINK zipf 00:04:45.919 CXX test/cpp_headers/dma.o 00:04:46.182 LINK env_dpdk_post_init 00:04:46.182 LINK spdk_trace_record 00:04:46.182 CXX test/cpp_headers/endian.o 00:04:46.182 CXX test/cpp_headers/env_dpdk.o 00:04:46.183 CXX test/cpp_headers/env.o 00:04:46.183 CXX test/cpp_headers/event.o 00:04:46.183 CXX test/cpp_headers/fd_group.o 00:04:46.183 CXX test/cpp_headers/fd.o 00:04:46.183 CXX test/cpp_headers/file.o 00:04:46.183 LINK stub 00:04:46.183 CXX test/cpp_headers/fsdev.o 00:04:46.183 CXX test/cpp_headers/fsdev_module.o 00:04:46.183 CXX test/cpp_headers/ftl.o 00:04:46.183 LINK verify 00:04:46.183 CXX test/cpp_headers/fuse_dispatcher.o 00:04:46.183 LINK iscsi_tgt 00:04:46.183 CXX test/cpp_headers/gpt_spec.o 00:04:46.183 LINK spdk_tgt 00:04:46.183 CXX test/cpp_headers/hexlify.o 00:04:46.183 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:46.183 LINK ioat_perf 00:04:46.183 LINK bdev_svc 00:04:46.183 CXX test/cpp_headers/histogram_data.o 00:04:46.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:46.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:46.183 CXX test/cpp_headers/idxd.o 00:04:46.183 CXX test/cpp_headers/idxd_spec.o 00:04:46.445 CXX 
test/cpp_headers/init.o 00:04:46.445 CXX test/cpp_headers/ioat.o 00:04:46.445 CXX test/cpp_headers/ioat_spec.o 00:04:46.445 CXX test/cpp_headers/iscsi_spec.o 00:04:46.445 CXX test/cpp_headers/json.o 00:04:46.445 LINK spdk_dd 00:04:46.445 LINK spdk_trace 00:04:46.445 CXX test/cpp_headers/jsonrpc.o 00:04:46.445 CXX test/cpp_headers/keyring.o 00:04:46.445 CXX test/cpp_headers/keyring_module.o 00:04:46.445 CXX test/cpp_headers/likely.o 00:04:46.445 CXX test/cpp_headers/log.o 00:04:46.445 CXX test/cpp_headers/lvol.o 00:04:46.445 LINK pci_ut 00:04:46.445 CXX test/cpp_headers/md5.o 00:04:46.445 CXX test/cpp_headers/memory.o 00:04:46.445 CXX test/cpp_headers/mmio.o 00:04:46.445 CXX test/cpp_headers/nbd.o 00:04:46.445 CXX test/cpp_headers/net.o 00:04:46.445 CXX test/cpp_headers/notify.o 00:04:46.445 CXX test/cpp_headers/nvme.o 00:04:46.445 CXX test/cpp_headers/nvme_intel.o 00:04:46.445 CXX test/cpp_headers/nvme_ocssd.o 00:04:46.445 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:46.445 CXX test/cpp_headers/nvme_spec.o 00:04:46.445 CXX test/cpp_headers/nvme_zns.o 00:04:46.445 CXX test/cpp_headers/nvmf_cmd.o 00:04:46.708 LINK nvme_fuzz 00:04:46.708 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:46.708 CC test/event/event_perf/event_perf.o 00:04:46.708 CXX test/cpp_headers/nvmf.o 00:04:46.708 CXX test/cpp_headers/nvmf_spec.o 00:04:46.708 CC test/event/reactor_perf/reactor_perf.o 00:04:46.708 CC test/event/reactor/reactor.o 00:04:46.708 CXX test/cpp_headers/nvmf_transport.o 00:04:46.708 CXX test/cpp_headers/opal.o 00:04:46.708 LINK spdk_bdev 00:04:46.708 CC test/event/app_repeat/app_repeat.o 00:04:46.708 CXX test/cpp_headers/opal_spec.o 00:04:46.708 CC examples/sock/hello_world/hello_sock.o 00:04:46.708 CXX test/cpp_headers/pci_ids.o 00:04:46.708 LINK test_dma 00:04:46.708 LINK spdk_nvme 00:04:46.708 CXX test/cpp_headers/pipe.o 00:04:46.708 CC examples/thread/thread/thread_ex.o 00:04:46.708 CC examples/vmd/lsvmd/lsvmd.o 00:04:46.708 CXX test/cpp_headers/queue.o 00:04:46.708 CC examples/idxd/perf/perf.o 00:04:46.708 CC test/event/scheduler/scheduler.o 00:04:46.708 CC examples/vmd/led/led.o 00:04:46.708 CXX test/cpp_headers/reduce.o 00:04:46.971 CXX test/cpp_headers/rpc.o 00:04:46.971 CXX test/cpp_headers/scheduler.o 00:04:46.971 CXX test/cpp_headers/scsi.o 00:04:46.971 CXX test/cpp_headers/scsi_spec.o 00:04:46.971 CXX test/cpp_headers/sock.o 00:04:46.971 CXX test/cpp_headers/stdinc.o 00:04:46.971 CXX test/cpp_headers/string.o 00:04:46.971 CXX test/cpp_headers/thread.o 00:04:46.971 CXX test/cpp_headers/trace.o 00:04:46.971 CXX test/cpp_headers/trace_parser.o 00:04:46.971 CXX test/cpp_headers/tree.o 00:04:46.971 CXX test/cpp_headers/ublk.o 00:04:46.971 CXX test/cpp_headers/util.o 00:04:46.971 CC app/vhost/vhost.o 00:04:46.971 CXX test/cpp_headers/uuid.o 00:04:46.971 CXX test/cpp_headers/version.o 00:04:46.971 CXX test/cpp_headers/vfio_user_pci.o 00:04:46.971 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.971 CXX test/cpp_headers/vhost.o 00:04:46.971 CXX test/cpp_headers/vmd.o 00:04:46.971 CXX test/cpp_headers/xor.o 00:04:46.971 LINK mem_callbacks 00:04:46.971 LINK event_perf 00:04:46.971 LINK reactor 00:04:46.971 LINK vhost_fuzz 00:04:46.971 LINK reactor_perf 00:04:46.971 CXX test/cpp_headers/zipf.o 00:04:46.971 LINK app_repeat 00:04:47.230 LINK lsvmd 00:04:47.230 LINK spdk_nvme_perf 00:04:47.230 LINK spdk_nvme_identify 00:04:47.230 LINK led 00:04:47.230 LINK spdk_top 00:04:47.230 LINK hello_sock 00:04:47.230 LINK thread 00:04:47.230 LINK scheduler 00:04:47.230 LINK vhost 00:04:47.490 CC test/nvme/sgl/sgl.o 
00:04:47.490 CC test/nvme/overhead/overhead.o 00:04:47.490 CC test/nvme/e2edp/nvme_dp.o 00:04:47.490 CC test/nvme/reserve/reserve.o 00:04:47.490 CC test/nvme/reset/reset.o 00:04:47.490 CC test/nvme/aer/aer.o 00:04:47.490 CC test/nvme/simple_copy/simple_copy.o 00:04:47.490 CC test/nvme/startup/startup.o 00:04:47.490 CC test/nvme/connect_stress/connect_stress.o 00:04:47.490 CC test/nvme/fdp/fdp.o 00:04:47.490 CC test/nvme/boot_partition/boot_partition.o 00:04:47.490 CC test/nvme/fused_ordering/fused_ordering.o 00:04:47.490 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:47.490 CC test/nvme/compliance/nvme_compliance.o 00:04:47.490 CC test/nvme/err_injection/err_injection.o 00:04:47.490 CC test/nvme/cuse/cuse.o 00:04:47.490 LINK idxd_perf 00:04:47.490 CC test/blobfs/mkfs/mkfs.o 00:04:47.490 CC test/accel/dif/dif.o 00:04:47.490 CC test/lvol/esnap/esnap.o 00:04:47.749 LINK startup 00:04:47.749 CC examples/nvme/arbitration/arbitration.o 00:04:47.749 LINK err_injection 00:04:47.749 CC examples/nvme/hotplug/hotplug.o 00:04:47.749 LINK connect_stress 00:04:47.749 CC examples/nvme/reconnect/reconnect.o 00:04:47.749 CC examples/nvme/hello_world/hello_world.o 00:04:47.749 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:47.749 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:47.749 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:47.749 CC examples/nvme/abort/abort.o 00:04:47.749 LINK boot_partition 00:04:47.749 LINK fused_ordering 00:04:47.749 LINK sgl 00:04:47.749 LINK mkfs 00:04:47.749 LINK simple_copy 00:04:47.749 LINK nvme_dp 00:04:47.749 LINK reset 00:04:47.749 LINK memory_ut 00:04:47.749 LINK doorbell_aers 00:04:47.749 LINK aer 00:04:47.749 LINK reserve 00:04:47.749 LINK nvme_compliance 00:04:47.749 CC examples/accel/perf/accel_perf.o 00:04:48.008 LINK fdp 00:04:48.008 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:48.008 CC examples/blob/cli/blobcli.o 00:04:48.008 CC examples/blob/hello_world/hello_blob.o 00:04:48.008 LINK overhead 00:04:48.008 LINK pmr_persistence 00:04:48.008 LINK cmb_copy 00:04:48.008 LINK hotplug 00:04:48.008 LINK hello_world 00:04:48.267 LINK arbitration 00:04:48.267 LINK abort 00:04:48.267 LINK reconnect 00:04:48.267 LINK hello_fsdev 00:04:48.267 LINK dif 00:04:48.267 LINK hello_blob 00:04:48.526 LINK accel_perf 00:04:48.526 LINK nvme_manage 00:04:48.526 LINK blobcli 00:04:48.526 LINK iscsi_fuzz 00:04:48.526 CC test/bdev/bdevio/bdevio.o 00:04:48.784 CC examples/bdev/hello_world/hello_bdev.o 00:04:48.784 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.042 LINK hello_bdev 00:04:49.042 LINK bdevio 00:04:49.042 LINK cuse 00:04:49.608 LINK bdevperf 00:04:49.867 CC examples/nvmf/nvmf/nvmf.o 00:04:50.434 LINK nvmf 00:04:52.969 LINK esnap 00:04:52.969 00:04:52.969 real 1m7.103s 00:04:52.969 user 9m3.544s 00:04:52.969 sys 1m55.509s 00:04:52.969 01:22:32 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:52.969 01:22:32 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.969 ************************************ 00:04:52.969 END TEST make 00:04:52.969 ************************************ 00:04:52.969 01:22:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.969 01:22:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.969 01:22:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.969 01:22:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.969 01:22:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.969 01:22:32 -- 
pm/common@44 -- $ pid=667255 00:04:52.969 01:22:32 -- pm/common@50 -- $ kill -TERM 667255 00:04:52.969 01:22:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.969 01:22:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.969 01:22:32 -- pm/common@44 -- $ pid=667257 00:04:52.969 01:22:32 -- pm/common@50 -- $ kill -TERM 667257 00:04:52.969 01:22:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.969 01:22:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:52.969 01:22:32 -- pm/common@44 -- $ pid=667259 00:04:52.969 01:22:32 -- pm/common@50 -- $ kill -TERM 667259 00:04:52.969 01:22:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.969 01:22:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:52.969 01:22:32 -- pm/common@44 -- $ pid=667287 00:04:52.969 01:22:32 -- pm/common@50 -- $ sudo -E kill -TERM 667287 00:04:53.228 01:22:32 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.228 01:22:32 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.228 01:22:32 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.228 01:22:32 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.228 01:22:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.228 01:22:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.228 01:22:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.228 01:22:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.228 01:22:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.228 01:22:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.228 01:22:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.228 01:22:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.228 01:22:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.228 01:22:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.228 01:22:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.228 01:22:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:53.228 01:22:32 -- scripts/common.sh@345 -- # : 1 00:04:53.228 01:22:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.228 01:22:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.228 01:22:32 -- scripts/common.sh@365 -- # decimal 1 00:04:53.228 01:22:32 -- scripts/common.sh@353 -- # local d=1 00:04:53.228 01:22:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.228 01:22:32 -- scripts/common.sh@355 -- # echo 1 00:04:53.228 01:22:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.228 01:22:32 -- scripts/common.sh@366 -- # decimal 2 00:04:53.228 01:22:32 -- scripts/common.sh@353 -- # local d=2 00:04:53.228 01:22:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.228 01:22:32 -- scripts/common.sh@355 -- # echo 2 00:04:53.228 01:22:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.228 01:22:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.228 01:22:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.228 01:22:32 -- scripts/common.sh@368 -- # return 0 00:04:53.228 01:22:32 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.228 01:22:32 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.228 --rc genhtml_branch_coverage=1 00:04:53.228 --rc genhtml_function_coverage=1 00:04:53.228 --rc genhtml_legend=1 00:04:53.228 --rc geninfo_all_blocks=1 00:04:53.228 --rc geninfo_unexecuted_blocks=1 00:04:53.228 00:04:53.228 ' 00:04:53.228 01:22:32 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.228 --rc genhtml_branch_coverage=1 00:04:53.228 --rc genhtml_function_coverage=1 00:04:53.228 --rc genhtml_legend=1 00:04:53.228 --rc geninfo_all_blocks=1 00:04:53.228 --rc geninfo_unexecuted_blocks=1 00:04:53.228 00:04:53.228 ' 00:04:53.228 01:22:32 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.228 --rc genhtml_branch_coverage=1 00:04:53.228 --rc genhtml_function_coverage=1 00:04:53.228 --rc genhtml_legend=1 00:04:53.228 --rc geninfo_all_blocks=1 00:04:53.228 --rc geninfo_unexecuted_blocks=1 00:04:53.228 00:04:53.228 ' 00:04:53.228 01:22:32 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.228 --rc genhtml_branch_coverage=1 00:04:53.228 --rc genhtml_function_coverage=1 00:04:53.228 --rc genhtml_legend=1 00:04:53.228 --rc geninfo_all_blocks=1 00:04:53.228 --rc geninfo_unexecuted_blocks=1 00:04:53.228 00:04:53.228 ' 00:04:53.228 01:22:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.228 01:22:32 -- nvmf/common.sh@7 -- # uname -s 00:04:53.228 01:22:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.228 01:22:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.228 01:22:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.228 01:22:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.228 01:22:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.228 01:22:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.228 01:22:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.228 01:22:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.228 01:22:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.228 01:22:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.228 01:22:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.228 01:22:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.228 01:22:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.228 01:22:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.228 01:22:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:53.228 01:22:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.228 01:22:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.228 01:22:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.228 01:22:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.228 01:22:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.228 01:22:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.228 01:22:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.229 01:22:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.229 01:22:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.229 01:22:32 -- paths/export.sh@5 -- # export PATH 00:04:53.229 01:22:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.229 01:22:32 -- nvmf/common.sh@51 -- # : 0 00:04:53.229 01:22:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.229 01:22:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.229 01:22:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.229 01:22:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.229 01:22:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.229 01:22:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.229 01:22:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.229 01:22:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.229 01:22:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.229 01:22:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:53.229 01:22:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:53.229 01:22:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:53.229 01:22:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:53.229 01:22:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
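The next few trace lines show autotest saving the host's core handler and pointing kernel core dumps at SPDK's core-collector.sh. The underlying mechanism is the standard Linux core_pattern pipe handler; a minimal illustration follows (variable names are illustrative, root privileges are required, and restoring the old pattern afterwards is assumed to be the caller's responsibility):

  # Save the current handler so it can be restored after the test run.
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
  mkdir -p "$output_dir/coredumps"
  # A leading '|' makes the kernel pipe each core into the named program;
  # %P, %s and %t expand to the crashing PID, signal number and dump time.
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  # ... run tests ...
  echo "$old_core_pattern" > /proc/sys/kernel/core_pattern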
00:04:53.229 01:22:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:53.229 01:22:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:53.229 01:22:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:53.229 01:22:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:53.229 01:22:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:53.229 01:22:32 -- spdk/autotest.sh@48 -- # udevadm_pid=748641 00:04:53.229 01:22:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:53.229 01:22:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:53.229 01:22:32 -- pm/common@17 -- # local monitor 00:04:53.229 01:22:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.229 01:22:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.229 01:22:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.229 01:22:32 -- pm/common@21 -- # date +%s 00:04:53.229 01:22:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.229 01:22:32 -- pm/common@21 -- # date +%s 00:04:53.229 01:22:32 -- pm/common@25 -- # sleep 1 00:04:53.229 01:22:32 -- pm/common@21 -- # date +%s 00:04:53.229 01:22:32 -- pm/common@21 -- # date +%s 00:04:53.229 01:22:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727738552 00:04:53.229 01:22:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727738552 00:04:53.229 01:22:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727738552 00:04:53.229 01:22:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727738552 00:04:53.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727738552_collect-vmstat.pm.log 00:04:53.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727738552_collect-cpu-load.pm.log 00:04:53.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727738552_collect-cpu-temp.pm.log 00:04:53.229 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727738552_collect-bmc-pm.bmc.pm.log 00:04:54.162 01:22:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:54.162 01:22:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:54.162 01:22:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.162 01:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.162 01:22:33 -- spdk/autotest.sh@59 -- # create_test_list 00:04:54.162 01:22:33 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:54.162 01:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.162 01:22:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:54.162 01:22:34 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:54.162 01:22:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:54.162 01:22:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:54.162 01:22:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:54.162 01:22:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:54.162 01:22:34 -- common/autotest_common.sh@1455 -- # uname 00:04:54.162 01:22:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:54.162 01:22:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:54.162 01:22:34 -- common/autotest_common.sh@1475 -- # uname 00:04:54.162 01:22:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:54.162 01:22:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:54.162 01:22:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.420 lcov: LCOV version 1.15 00:04:54.420 01:22:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:16.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:16.332 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:42.909 01:23:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:42.909 01:23:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.909 01:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:42.909 01:23:18 -- spdk/autotest.sh@78 -- # rm -f 00:05:42.909 01:23:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:42.909 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:42.909 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:42.909 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:42.909 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:42.909 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:42.909 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:42.909 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:42.909 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:42.910 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:42.910 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:42.910 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:42.910 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:42.910 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:42.910 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:42.910 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:42.910 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:42.910 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:42.910 01:23:20 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:42.910 01:23:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:42.910 01:23:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:42.910 01:23:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:42.910 01:23:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:42.910 01:23:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:42.910 01:23:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:42.910 01:23:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.910 01:23:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:42.910 01:23:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:42.910 01:23:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.910 01:23:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.910 01:23:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:42.910 01:23:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:42.910 01:23:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:42.910 No valid GPT data, bailing 00:05:42.910 01:23:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:42.910 01:23:20 -- scripts/common.sh@394 -- # pt= 00:05:42.910 01:23:20 -- scripts/common.sh@395 -- # return 1 00:05:42.910 01:23:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:42.910 1+0 records in 00:05:42.910 1+0 records out 00:05:42.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228781 s, 458 MB/s 00:05:42.910 01:23:20 -- spdk/autotest.sh@105 -- # sync 00:05:42.910 01:23:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:42.910 01:23:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:42.910 01:23:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:42.910 01:23:22 -- spdk/autotest.sh@111 -- # uname -s 00:05:42.910 01:23:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:42.910 01:23:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:42.910 01:23:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:43.845 Hugepages 00:05:43.845 node hugesize free / total 00:05:43.845 node0 1048576kB 0 / 0 00:05:43.845 node0 2048kB 0 / 0 00:05:43.845 node1 1048576kB 0 / 0 00:05:43.845 node1 2048kB 0 / 0 00:05:43.845 00:05:43.845 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.845 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:43.845 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:43.845 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:43.845 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:43.845 01:23:23 -- spdk/autotest.sh@117 -- # uname -s 00:05:43.845 01:23:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:43.845 01:23:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:43.845 01:23:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:45.218 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.218 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.218 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:46.154 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.154 01:23:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:47.529 01:23:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:47.529 01:23:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:47.529 01:23:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.529 01:23:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:47.529 01:23:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.529 01:23:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.529 01:23:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.529 01:23:26 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:47.529 01:23:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.529 01:23:27 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:47.529 01:23:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:47.529 01:23:27 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.462 Waiting for block devices as requested 00:05:48.462 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:48.462 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:48.722 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:48.722 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:48.722 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:48.981 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:48.981 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:48.981 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:48.981 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:48.981 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:49.238 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:49.238 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:49.238 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:49.497 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:49.497 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:49.497 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:49.497 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:49.755 01:23:29 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.755 01:23:29 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:49.755 01:23:29 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:49.755 01:23:29 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:49.755 01:23:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:49.755 01:23:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.755 01:23:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.755 01:23:29 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:49.756 01:23:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.756 01:23:29 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.756 01:23:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:49.756 01:23:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.756 01:23:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.756 01:23:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.756 01:23:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.756 01:23:29 -- common/autotest_common.sh@1541 -- # continue 00:05:49.756 01:23:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.756 01:23:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.756 01:23:29 -- common/autotest_common.sh@10 -- # set +x 00:05:49.756 01:23:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.756 01:23:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.756 01:23:29 -- common/autotest_common.sh@10 -- # set +x 00:05:49.756 01:23:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.131 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.131 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:51.131 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:52.068 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:52.068 01:23:31 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:52.068 01:23:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.068 01:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.068 01:23:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:52.068 01:23:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:52.068 01:23:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:52.068 01:23:31 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:52.068 01:23:31 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:52.068 01:23:31 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:52.068 01:23:31 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:52.068 01:23:31 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:52.068 01:23:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:52.068 01:23:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:52.068 01:23:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.068 01:23:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:52.068 01:23:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:52.068 01:23:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:52.068 01:23:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:52.068 01:23:31 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:52.068 01:23:31 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:52.068 01:23:31 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:52.068 01:23:31 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:52.068 01:23:31 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:52.068 01:23:31 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:52.068 01:23:31 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:52.068 01:23:31 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:52.068 01:23:31 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=760253 00:05:52.068 01:23:31 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.068 01:23:31 -- common/autotest_common.sh@1583 -- # waitforlisten 760253 00:05:52.068 01:23:31 -- common/autotest_common.sh@831 -- # '[' -z 760253 ']' 00:05:52.068 01:23:31 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.068 01:23:31 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.068 01:23:31 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.068 01:23:31 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.068 01:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.068 [2024-10-01 01:23:31.906177] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
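Editor's note: the get_nvme_bdfs_by_id trace above boils down to reading each controller's PCI device ID out of sysfs and keeping the ones that match 0x0a54. A minimal stand-alone sketch of that idea, assuming only that sysfs is mounted (the real helper resolves controllers via gen_nvme.sh and jq rather than globbing sysfs):

# Keep NVMe-class PCI functions (class 0x010802) whose device ID matches the target.
target=0x0a54
for dev in /sys/bus/pci/devices/*; do
    [ "$(cat "$dev/class")" = "0x010802" ] || continue            # NVMe controllers only
    [ "$(cat "$dev/device")" = "$target" ] && basename "$dev"     # prints e.g. 0000:88:00.0
done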
00:05:52.068 [2024-10-01 01:23:31.906259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760253 ] 00:05:52.328 [2024-10-01 01:23:31.972675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.328 [2024-10-01 01:23:32.063222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.586 01:23:32 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.586 01:23:32 -- common/autotest_common.sh@864 -- # return 0 00:05:52.586 01:23:32 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:52.586 01:23:32 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:52.586 01:23:32 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:55.866 nvme0n1 00:05:55.866 01:23:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:55.866 [2024-10-01 01:23:35.689754] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:55.866 [2024-10-01 01:23:35.689806] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:55.866 request: 00:05:55.866 { 00:05:55.866 "nvme_ctrlr_name": "nvme0", 00:05:55.866 "password": "test", 00:05:55.866 "method": "bdev_nvme_opal_revert", 00:05:55.866 "req_id": 1 00:05:55.866 } 00:05:55.866 Got JSON-RPC error response 00:05:55.866 response: 00:05:55.866 { 00:05:55.866 "code": -32603, 00:05:55.866 "message": "Internal error" 00:05:55.866 } 00:05:55.866 01:23:35 -- common/autotest_common.sh@1589 -- # true 00:05:55.866 01:23:35 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:55.866 01:23:35 -- common/autotest_common.sh@1593 -- # killprocess 760253 00:05:55.866 01:23:35 -- common/autotest_common.sh@950 -- # '[' -z 760253 ']' 00:05:55.866 01:23:35 -- common/autotest_common.sh@954 -- # kill -0 760253 00:05:55.866 01:23:35 -- common/autotest_common.sh@955 -- # uname 00:05:55.866 01:23:35 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.866 01:23:35 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 760253 00:05:56.124 01:23:35 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.124 01:23:35 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.124 01:23:35 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 760253' 00:05:56.124 killing process with pid 760253 00:05:56.124 01:23:35 -- common/autotest_common.sh@969 -- # kill 760253 00:05:56.124 01:23:35 -- common/autotest_common.sh@974 -- # wait 760253 00:05:58.019 01:23:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:58.019 01:23:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:58.019 01:23:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.019 01:23:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.019 01:23:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:58.019 01:23:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.019 01:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.019 01:23:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:58.019 01:23:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
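Editor's note: the Opal revert attempt above is a plain JSON-RPC call against the target's UNIX socket; rpc.py builds the request body the log echoes back alongside the error. A hedged sketch of the same call, using the /var/tmp/spdk.sock path and the nvme0 controller name from this run (the raw-socket variant is only illustrative, not how autotest issues it):

# As autotest does it, via the bundled helper:
scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test

# Roughly equivalent raw JSON-RPC 2.0 request written straight to the socket:
printf '%s' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_opal_revert","params":{"nvme_ctrlr_name":"nvme0","password":"test"}}' |
python3 -c 'import socket,sys; s=socket.socket(socket.AF_UNIX); s.connect("/var/tmp/spdk.sock"); s.sendall(sys.stdin.buffer.read()); print(s.recv(65536).decode())'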
00:05:58.019 01:23:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.019 01:23:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.019 01:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.019 ************************************ 00:05:58.019 START TEST env 00:05:58.019 ************************************ 00:05:58.019 01:23:37 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:58.020 * Looking for test storage... 00:05:58.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.020 01:23:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.020 01:23:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.020 01:23:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.020 01:23:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.020 01:23:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.020 01:23:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.020 01:23:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.020 01:23:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.020 01:23:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.020 01:23:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.020 01:23:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.020 01:23:37 env -- scripts/common.sh@344 -- # case "$op" in 00:05:58.020 01:23:37 env -- scripts/common.sh@345 -- # : 1 00:05:58.020 01:23:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.020 01:23:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.020 01:23:37 env -- scripts/common.sh@365 -- # decimal 1 00:05:58.020 01:23:37 env -- scripts/common.sh@353 -- # local d=1 00:05:58.020 01:23:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.020 01:23:37 env -- scripts/common.sh@355 -- # echo 1 00:05:58.020 01:23:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.020 01:23:37 env -- scripts/common.sh@366 -- # decimal 2 00:05:58.020 01:23:37 env -- scripts/common.sh@353 -- # local d=2 00:05:58.020 01:23:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.020 01:23:37 env -- scripts/common.sh@355 -- # echo 2 00:05:58.020 01:23:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.020 01:23:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.020 01:23:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.020 01:23:37 env -- scripts/common.sh@368 -- # return 0 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.020 --rc genhtml_branch_coverage=1 00:05:58.020 --rc genhtml_function_coverage=1 00:05:58.020 --rc genhtml_legend=1 00:05:58.020 --rc geninfo_all_blocks=1 00:05:58.020 --rc geninfo_unexecuted_blocks=1 00:05:58.020 00:05:58.020 ' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.020 --rc genhtml_branch_coverage=1 00:05:58.020 --rc genhtml_function_coverage=1 00:05:58.020 --rc genhtml_legend=1 00:05:58.020 --rc geninfo_all_blocks=1 00:05:58.020 --rc geninfo_unexecuted_blocks=1 00:05:58.020 00:05:58.020 ' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.020 --rc genhtml_branch_coverage=1 00:05:58.020 --rc genhtml_function_coverage=1 00:05:58.020 --rc genhtml_legend=1 00:05:58.020 --rc geninfo_all_blocks=1 00:05:58.020 --rc geninfo_unexecuted_blocks=1 00:05:58.020 00:05:58.020 ' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.020 --rc genhtml_branch_coverage=1 00:05:58.020 --rc genhtml_function_coverage=1 00:05:58.020 --rc genhtml_legend=1 00:05:58.020 --rc geninfo_all_blocks=1 00:05:58.020 --rc geninfo_unexecuted_blocks=1 00:05:58.020 00:05:58.020 ' 00:05:58.020 01:23:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.020 01:23:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.020 01:23:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.020 ************************************ 00:05:58.020 START TEST env_memory 00:05:58.020 ************************************ 00:05:58.020 01:23:37 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:58.020 00:05:58.020 00:05:58.020 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.020 http://cunit.sourceforge.net/ 00:05:58.020 00:05:58.020 00:05:58.020 Suite: memory 00:05:58.020 Test: alloc and free memory map ...[2024-10-01 01:23:37.783565] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.020 passed 00:05:58.020 Test: mem map translation ...[2024-10-01 01:23:37.803893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.020 [2024-10-01 01:23:37.803914] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.020 [2024-10-01 01:23:37.803972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.020 [2024-10-01 01:23:37.803984] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.020 passed 00:05:58.020 Test: mem map registration ...[2024-10-01 01:23:37.845468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:58.020 [2024-10-01 01:23:37.845487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:58.020 passed 00:05:58.278 Test: mem map adjacent registrations ...passed 00:05:58.278 00:05:58.278 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.278 suites 1 1 n/a 0 0 00:05:58.278 tests 4 4 4 0 0 00:05:58.278 asserts 152 152 152 0 n/a 00:05:58.278 00:05:58.278 Elapsed time = 0.145 seconds 00:05:58.278 00:05:58.278 real 0m0.153s 00:05:58.278 user 0m0.145s 00:05:58.278 sys 0m0.007s 00:05:58.278 01:23:37 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.278 01:23:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:58.278 ************************************ 00:05:58.278 END TEST env_memory 00:05:58.278 ************************************ 00:05:58.278 01:23:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.278 01:23:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.278 01:23:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.278 01:23:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.278 ************************************ 00:05:58.278 START TEST env_vtophys 00:05:58.278 ************************************ 00:05:58.278 01:23:37 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:58.278 EAL: lib.eal log level changed from notice to debug 00:05:58.278 EAL: Detected lcore 0 as core 0 on socket 0 00:05:58.278 EAL: Detected lcore 1 as core 1 on socket 0 00:05:58.278 EAL: Detected lcore 2 as core 2 on socket 0 00:05:58.278 EAL: Detected lcore 3 as core 3 on socket 0 00:05:58.278 EAL: Detected lcore 4 as core 4 on socket 0 00:05:58.278 EAL: Detected lcore 5 as core 5 on socket 0 00:05:58.278 EAL: Detected lcore 6 as core 8 on socket 0 00:05:58.278 EAL: Detected lcore 7 as core 9 on socket 0 00:05:58.278 EAL: Detected lcore 8 as core 10 on socket 0 00:05:58.278 EAL: Detected lcore 9 as core 11 on socket 0 00:05:58.278 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:58.278 EAL: Detected lcore 11 as core 13 on socket 0 00:05:58.278 EAL: Detected lcore 12 as core 0 on socket 1 00:05:58.278 EAL: Detected lcore 13 as core 1 on socket 1 00:05:58.278 EAL: Detected lcore 14 as core 2 on socket 1 00:05:58.278 EAL: Detected lcore 15 as core 3 on socket 1 00:05:58.278 EAL: Detected lcore 16 as core 4 on socket 1 00:05:58.278 EAL: Detected lcore 17 as core 5 on socket 1 00:05:58.278 EAL: Detected lcore 18 as core 8 on socket 1 00:05:58.278 EAL: Detected lcore 19 as core 9 on socket 1 00:05:58.278 EAL: Detected lcore 20 as core 10 on socket 1 00:05:58.278 EAL: Detected lcore 21 as core 11 on socket 1 00:05:58.278 EAL: Detected lcore 22 as core 12 on socket 1 00:05:58.278 EAL: Detected lcore 23 as core 13 on socket 1 00:05:58.278 EAL: Detected lcore 24 as core 0 on socket 0 00:05:58.278 EAL: Detected lcore 25 as core 1 on socket 0 00:05:58.278 EAL: Detected lcore 26 as core 2 on socket 0 00:05:58.278 EAL: Detected lcore 27 as core 3 on socket 0 00:05:58.278 EAL: Detected lcore 28 as core 4 on socket 0 00:05:58.278 EAL: Detected lcore 29 as core 5 on socket 0 00:05:58.278 EAL: Detected lcore 30 as core 8 on socket 0 00:05:58.278 EAL: Detected lcore 31 as core 9 on socket 0 00:05:58.278 EAL: Detected lcore 32 as core 10 on socket 0 00:05:58.278 EAL: Detected lcore 33 as core 11 on socket 0 00:05:58.278 EAL: Detected lcore 34 as core 12 on socket 0 00:05:58.278 EAL: Detected lcore 35 as core 13 on socket 0 00:05:58.278 EAL: Detected lcore 36 as core 0 on socket 1 00:05:58.278 EAL: Detected lcore 37 as core 1 on socket 1 00:05:58.278 EAL: Detected lcore 38 as core 2 on socket 1 00:05:58.278 EAL: Detected lcore 39 as core 3 on socket 1 00:05:58.278 EAL: Detected lcore 40 as core 4 on socket 1 00:05:58.278 EAL: Detected lcore 41 as core 5 on socket 1 00:05:58.278 EAL: Detected lcore 42 as core 8 on socket 1 00:05:58.278 EAL: Detected lcore 43 as core 9 on socket 1 00:05:58.278 EAL: Detected lcore 44 as core 10 on socket 1 00:05:58.278 EAL: Detected lcore 45 as core 11 on socket 1 00:05:58.278 EAL: Detected lcore 46 as core 12 on socket 1 00:05:58.278 EAL: Detected lcore 47 as core 13 on socket 1 00:05:58.278 EAL: Maximum logical cores by configuration: 128 00:05:58.278 EAL: Detected CPU lcores: 48 00:05:58.278 EAL: Detected NUMA nodes: 2 00:05:58.278 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:58.278 EAL: Detected shared linkage of DPDK 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:58.278 EAL: Registered [vdev] bus. 
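Editor's note: the lcore table the EAL prints above is derived from the kernel's CPU topology files; the same core/socket mapping can be reproduced from the shell with standard sysfs paths, no DPDK involved:

# Print "cpu N: core C socket S" for each logical CPU, mirroring the
# 'Detected lcore N as core C on socket S' lines above.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}
    echo "cpu $n: core $(cat "$cpu/topology/core_id") socket $(cat "$cpu/topology/physical_package_id")"
done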
00:05:58.278 EAL: bus.vdev log level changed from disabled to notice 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:58.278 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:58.278 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:58.278 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:58.278 EAL: No shared files mode enabled, IPC will be disabled 00:05:58.278 EAL: No shared files mode enabled, IPC is disabled 00:05:58.278 EAL: Bus pci wants IOVA as 'DC' 00:05:58.278 EAL: Bus vdev wants IOVA as 'DC' 00:05:58.278 EAL: Buses did not request a specific IOVA mode. 00:05:58.278 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:58.278 EAL: Selected IOVA mode 'VA' 00:05:58.278 EAL: Probing VFIO support... 00:05:58.278 EAL: IOMMU type 1 (Type 1) is supported 00:05:58.278 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:58.278 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:58.278 EAL: VFIO support initialized 00:05:58.278 EAL: Ask a virtual area of 0x2e000 bytes 00:05:58.278 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:58.278 EAL: Setting up physically contiguous memory... 
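Editor's note: the 'Probing VFIO support' / 'IOMMU type 1 is supported' lines depend on a working IOMMU and a loaded vfio-pci stack. A rough pre-flight check in the same spirit, using standard kernel interfaces (the exact checks setup.sh performs may differ):

# If this fails, the EAL falls back to uio or no-IOMMU operation instead of VFIO.
if [ -e /dev/vfio/vfio ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "VFIO container present and IOMMU groups populated"
else
    echo "no usable VFIO/IOMMU on this host"
fi
lsmod | grep -E '^vfio' || echo "vfio modules not listed (may be built into the kernel)"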
00:05:58.278 EAL: Setting maximum number of open files to 524288 00:05:58.278 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:58.278 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:58.278 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:58.278 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.278 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:58.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.278 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.278 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:58.278 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:58.278 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.278 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:58.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.278 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.278 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:58.278 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:58.278 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.278 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:58.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.278 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.278 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:58.278 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:58.278 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.278 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:58.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.278 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.278 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:58.278 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:58.278 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:58.278 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.278 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:58.278 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.279 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.279 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:58.279 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:58.279 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.279 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:58.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.279 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.279 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:58.279 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:58.279 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.279 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:58.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.279 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.279 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:58.279 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:58.279 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.279 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:58.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:58.279 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.279 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:58.279 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:58.279 EAL: Hugepages will be freed exactly as allocated. 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: TSC frequency is ~2700000 KHz 00:05:58.279 EAL: Main lcore 0 is ready (tid=7f6d9a6fca00;cpuset=[0]) 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 0 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 2MB 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:58.279 EAL: Mem event callback 'spdk:(nil)' registered 00:05:58.279 00:05:58.279 00:05:58.279 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.279 http://cunit.sourceforge.net/ 00:05:58.279 00:05:58.279 00:05:58.279 Suite: components_suite 00:05:58.279 Test: vtophys_malloc_test ...passed 00:05:58.279 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 4MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 4MB 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 6MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 6MB 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 10MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 10MB 00:05:58.279 EAL: Trying to obtain current memory policy. 
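Editor's note: each 'Heap on socket N was expanded/shrunk by X MB' message that follows corresponds to 2MB hugepages moving in and out of the per-NUMA-node pools shown in the setup.sh status table earlier. The pools can be watched from the shell with the standard hugepage counters while the test runs:

# Per-node 2MB hugepage accounting, comparable to the node0/node1 2048kB rows above.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    printf '%s: total=%s free=%s\n' "$(basename "$node")" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done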
00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.279 EAL: Trying to obtain current memory policy. 00:05:58.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.279 EAL: Restoring previous memory policy: 4 00:05:58.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.279 EAL: request: mp_malloc_sync 00:05:58.279 EAL: No shared files mode enabled, IPC is disabled 00:05:58.279 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.537 EAL: request: mp_malloc_sync 00:05:58.537 EAL: No shared files mode enabled, IPC is disabled 00:05:58.537 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.537 EAL: Trying to obtain current memory policy. 00:05:58.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.537 EAL: Restoring previous memory policy: 4 00:05:58.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.537 EAL: request: mp_malloc_sync 00:05:58.537 EAL: No shared files mode enabled, IPC is disabled 00:05:58.537 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.537 EAL: request: mp_malloc_sync 00:05:58.537 EAL: No shared files mode enabled, IPC is disabled 00:05:58.537 EAL: Heap on socket 0 was shrunk by 258MB 00:05:58.537 EAL: Trying to obtain current memory policy. 
00:05:58.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.794 EAL: Restoring previous memory policy: 4 00:05:58.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.794 EAL: request: mp_malloc_sync 00:05:58.794 EAL: No shared files mode enabled, IPC is disabled 00:05:58.794 EAL: Heap on socket 0 was expanded by 514MB 00:05:58.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.051 EAL: request: mp_malloc_sync 00:05:59.051 EAL: No shared files mode enabled, IPC is disabled 00:05:59.051 EAL: Heap on socket 0 was shrunk by 514MB 00:05:59.051 EAL: Trying to obtain current memory policy. 00:05:59.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.309 EAL: Restoring previous memory policy: 4 00:05:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.309 EAL: request: mp_malloc_sync 00:05:59.309 EAL: No shared files mode enabled, IPC is disabled 00:05:59.309 EAL: Heap on socket 0 was expanded by 1026MB 00:05:59.566 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.824 EAL: request: mp_malloc_sync 00:05:59.824 EAL: No shared files mode enabled, IPC is disabled 00:05:59.824 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:59.824 passed 00:05:59.824 00:05:59.824 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.824 suites 1 1 n/a 0 0 00:05:59.824 tests 2 2 2 0 0 00:05:59.824 asserts 497 497 497 0 n/a 00:05:59.824 00:05:59.824 Elapsed time = 1.405 seconds 00:05:59.824 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.824 EAL: request: mp_malloc_sync 00:05:59.824 EAL: No shared files mode enabled, IPC is disabled 00:05:59.824 EAL: Heap on socket 0 was shrunk by 2MB 00:05:59.824 EAL: No shared files mode enabled, IPC is disabled 00:05:59.824 EAL: No shared files mode enabled, IPC is disabled 00:05:59.824 EAL: No shared files mode enabled, IPC is disabled 00:05:59.824 00:05:59.824 real 0m1.525s 00:05:59.824 user 0m0.888s 00:05:59.824 sys 0m0.599s 00:05:59.824 01:23:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.824 01:23:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 ************************************ 00:05:59.824 END TEST env_vtophys 00:05:59.824 ************************************ 00:05:59.824 01:23:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.824 01:23:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.824 01:23:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.824 01:23:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.824 ************************************ 00:05:59.824 START TEST env_pci 00:05:59.824 ************************************ 00:05:59.824 01:23:39 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:59.824 00:05:59.824 00:05:59.824 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.824 http://cunit.sourceforge.net/ 00:05:59.824 00:05:59.824 00:05:59.824 Suite: pci 00:05:59.824 Test: pci_hook ...[2024-10-01 01:23:39.521263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 761226 has claimed it 00:05:59.824 EAL: Cannot find device (10000:00:01.0) 00:05:59.824 EAL: Failed to attach device on primary process 00:05:59.824 passed 00:05:59.824 00:05:59.824 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:59.824 suites 1 1 n/a 0 0 00:05:59.825 tests 1 1 1 0 0 00:05:59.825 asserts 25 25 25 0 n/a 00:05:59.825 00:05:59.825 Elapsed time = 0.021 seconds 00:05:59.825 00:05:59.825 real 0m0.032s 00:05:59.825 user 0m0.013s 00:05:59.825 sys 0m0.019s 00:05:59.825 01:23:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.825 01:23:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 ************************************ 00:05:59.825 END TEST env_pci 00:05:59.825 ************************************ 00:05:59.825 01:23:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.825 01:23:39 env -- env/env.sh@15 -- # uname 00:05:59.825 01:23:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.825 01:23:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:59.825 01:23:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.825 01:23:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:59.825 01:23:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.825 01:23:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.825 ************************************ 00:05:59.825 START TEST env_dpdk_post_init 00:05:59.825 ************************************ 00:05:59.825 01:23:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.825 EAL: Detected CPU lcores: 48 00:05:59.825 EAL: Detected NUMA nodes: 2 00:05:59.825 EAL: Detected shared linkage of DPDK 00:05:59.825 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.825 EAL: Selected IOVA mode 'VA' 00:05:59.825 EAL: VFIO support initialized 00:05:59.825 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.084 EAL: Using IOMMU type 1 (Type 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:00.084 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:01.018 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
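Editor's note: the spdk_ioat and spdk_nvme probes above only succeed because setup.sh previously moved those functions to vfio-pci (the 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' lines earlier). A manual equivalent for a single device, using the stock sysfs driver-override interface; 0000:88:00.0 is the NVMe drive in this run, and setup.sh normally handles all of this:

# Detach the device from its kernel driver and hand it to vfio-pci.
bdf=0000:88:00.0
dev=/sys/bus/pci/devices/$bdf
echo vfio-pci > "$dev/driver_override"
[ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo > "$dev/driver_override"   # clear the override once the device is bound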
00:06:04.297 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:04.297 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:04.297 Starting DPDK initialization... 00:06:04.297 Starting SPDK post initialization... 00:06:04.297 SPDK NVMe probe 00:06:04.297 Attaching to 0000:88:00.0 00:06:04.297 Attached to 0000:88:00.0 00:06:04.297 Cleaning up... 00:06:04.297 00:06:04.297 real 0m4.391s 00:06:04.297 user 0m3.265s 00:06:04.297 sys 0m0.181s 00:06:04.297 01:23:43 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.297 01:23:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.297 ************************************ 00:06:04.297 END TEST env_dpdk_post_init 00:06:04.297 ************************************ 00:06:04.297 01:23:44 env -- env/env.sh@26 -- # uname 00:06:04.297 01:23:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:04.297 01:23:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.297 01:23:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.298 01:23:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.298 01:23:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 ************************************ 00:06:04.298 START TEST env_mem_callbacks 00:06:04.298 ************************************ 00:06:04.298 01:23:44 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.298 EAL: Detected CPU lcores: 48 00:06:04.298 EAL: Detected NUMA nodes: 2 00:06:04.298 EAL: Detected shared linkage of DPDK 00:06:04.298 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.298 EAL: Selected IOVA mode 'VA' 00:06:04.298 EAL: VFIO support initialized 00:06:04.298 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.298 00:06:04.298 00:06:04.298 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.298 http://cunit.sourceforge.net/ 00:06:04.298 00:06:04.298 00:06:04.298 Suite: memory 00:06:04.298 Test: test ... 
00:06:04.298 register 0x200000200000 2097152 00:06:04.298 malloc 3145728 00:06:04.298 register 0x200000400000 4194304 00:06:04.298 buf 0x200000500000 len 3145728 PASSED 00:06:04.298 malloc 64 00:06:04.298 buf 0x2000004fff40 len 64 PASSED 00:06:04.298 malloc 4194304 00:06:04.298 register 0x200000800000 6291456 00:06:04.298 buf 0x200000a00000 len 4194304 PASSED 00:06:04.298 free 0x200000500000 3145728 00:06:04.298 free 0x2000004fff40 64 00:06:04.298 unregister 0x200000400000 4194304 PASSED 00:06:04.298 free 0x200000a00000 4194304 00:06:04.298 unregister 0x200000800000 6291456 PASSED 00:06:04.298 malloc 8388608 00:06:04.298 register 0x200000400000 10485760 00:06:04.298 buf 0x200000600000 len 8388608 PASSED 00:06:04.298 free 0x200000600000 8388608 00:06:04.298 unregister 0x200000400000 10485760 PASSED 00:06:04.298 passed 00:06:04.298 00:06:04.298 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.298 suites 1 1 n/a 0 0 00:06:04.298 tests 1 1 1 0 0 00:06:04.298 asserts 15 15 15 0 n/a 00:06:04.298 00:06:04.298 Elapsed time = 0.005 seconds 00:06:04.298 00:06:04.298 real 0m0.050s 00:06:04.298 user 0m0.012s 00:06:04.298 sys 0m0.037s 00:06:04.298 01:23:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.298 01:23:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 ************************************ 00:06:04.298 END TEST env_mem_callbacks 00:06:04.298 ************************************ 00:06:04.298 00:06:04.298 real 0m6.523s 00:06:04.298 user 0m4.518s 00:06:04.298 sys 0m1.046s 00:06:04.298 01:23:44 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.298 01:23:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 ************************************ 00:06:04.298 END TEST env 00:06:04.298 ************************************ 00:06:04.298 01:23:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.298 01:23:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.298 01:23:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.298 01:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:04.557 ************************************ 00:06:04.557 START TEST rpc 00:06:04.557 ************************************ 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:04.557 * Looking for test storage... 
00:06:04.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.557 01:23:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.557 01:23:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.557 01:23:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.557 01:23:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.557 01:23:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.557 01:23:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.557 01:23:44 rpc -- scripts/common.sh@345 -- # : 1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.557 01:23:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.557 01:23:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.557 01:23:44 rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.557 01:23:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.557 01:23:44 rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.557 01:23:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.557 01:23:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.557 01:23:44 rpc -- scripts/common.sh@368 -- # return 0 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.557 --rc genhtml_branch_coverage=1 00:06:04.557 --rc genhtml_function_coverage=1 00:06:04.557 --rc genhtml_legend=1 00:06:04.557 --rc geninfo_all_blocks=1 00:06:04.557 --rc geninfo_unexecuted_blocks=1 00:06:04.557 00:06:04.557 ' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.557 --rc genhtml_branch_coverage=1 00:06:04.557 --rc genhtml_function_coverage=1 00:06:04.557 --rc genhtml_legend=1 00:06:04.557 --rc geninfo_all_blocks=1 00:06:04.557 --rc geninfo_unexecuted_blocks=1 00:06:04.557 00:06:04.557 ' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.557 --rc genhtml_branch_coverage=1 00:06:04.557 --rc genhtml_function_coverage=1 
00:06:04.557 --rc genhtml_legend=1 00:06:04.557 --rc geninfo_all_blocks=1 00:06:04.557 --rc geninfo_unexecuted_blocks=1 00:06:04.557 00:06:04.557 ' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.557 --rc genhtml_branch_coverage=1 00:06:04.557 --rc genhtml_function_coverage=1 00:06:04.557 --rc genhtml_legend=1 00:06:04.557 --rc geninfo_all_blocks=1 00:06:04.557 --rc geninfo_unexecuted_blocks=1 00:06:04.557 00:06:04.557 ' 00:06:04.557 01:23:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=761942 00:06:04.557 01:23:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:04.557 01:23:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.557 01:23:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 761942 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 761942 ']' 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.557 01:23:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.557 [2024-10-01 01:23:44.348147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:04.557 [2024-10-01 01:23:44.348227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761942 ] 00:06:04.557 [2024-10-01 01:23:44.406073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.815 [2024-10-01 01:23:44.494699] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:04.815 [2024-10-01 01:23:44.494771] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 761942' to capture a snapshot of events at runtime. 00:06:04.815 [2024-10-01 01:23:44.494793] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.815 [2024-10-01 01:23:44.494807] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.815 [2024-10-01 01:23:44.494818] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid761942 for offline analysis/debug. 
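Editor's note: the app_setup_trace notices just above describe two ways to get at the bdev tracepoints enabled by '-e bdev': attach spdk_trace to the live target, or keep the shared-memory trace file for later. Both as one-liners matching this run's pid (761942) and workspace layout:

# Snapshot the tracepoints from the running spdk_tgt, as the notice suggests:
./build/bin/spdk_trace -s spdk_tgt -p 761942 > /tmp/spdk_tgt_trace.txt

# Or preserve the raw trace file for offline analysis/debug on another host:
cp /dev/shm/spdk_tgt_trace.pid761942 /tmp/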
00:06:04.815 [2024-10-01 01:23:44.494849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.073 01:23:44 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.073 01:23:44 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.073 01:23:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.073 01:23:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:05.073 01:23:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:05.073 01:23:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:05.073 01:23:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.073 01:23:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.073 01:23:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.073 ************************************ 00:06:05.073 START TEST rpc_integrity 00:06:05.073 ************************************ 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:05.073 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.073 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.073 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.073 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.073 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.073 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.074 { 00:06:05.074 "name": "Malloc0", 00:06:05.074 "aliases": [ 00:06:05.074 "2e9b9543-2272-4f8b-9631-1b8a3a388c01" 00:06:05.074 ], 00:06:05.074 "product_name": "Malloc disk", 00:06:05.074 "block_size": 512, 00:06:05.074 "num_blocks": 16384, 00:06:05.074 "uuid": "2e9b9543-2272-4f8b-9631-1b8a3a388c01", 00:06:05.074 "assigned_rate_limits": { 00:06:05.074 "rw_ios_per_sec": 0, 00:06:05.074 "rw_mbytes_per_sec": 0, 00:06:05.074 "r_mbytes_per_sec": 0, 00:06:05.074 "w_mbytes_per_sec": 0 00:06:05.074 }, 
00:06:05.074 "claimed": false, 00:06:05.074 "zoned": false, 00:06:05.074 "supported_io_types": { 00:06:05.074 "read": true, 00:06:05.074 "write": true, 00:06:05.074 "unmap": true, 00:06:05.074 "flush": true, 00:06:05.074 "reset": true, 00:06:05.074 "nvme_admin": false, 00:06:05.074 "nvme_io": false, 00:06:05.074 "nvme_io_md": false, 00:06:05.074 "write_zeroes": true, 00:06:05.074 "zcopy": true, 00:06:05.074 "get_zone_info": false, 00:06:05.074 "zone_management": false, 00:06:05.074 "zone_append": false, 00:06:05.074 "compare": false, 00:06:05.074 "compare_and_write": false, 00:06:05.074 "abort": true, 00:06:05.074 "seek_hole": false, 00:06:05.074 "seek_data": false, 00:06:05.074 "copy": true, 00:06:05.074 "nvme_iov_md": false 00:06:05.074 }, 00:06:05.074 "memory_domains": [ 00:06:05.074 { 00:06:05.074 "dma_device_id": "system", 00:06:05.074 "dma_device_type": 1 00:06:05.074 }, 00:06:05.074 { 00:06:05.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.074 "dma_device_type": 2 00:06:05.074 } 00:06:05.074 ], 00:06:05.074 "driver_specific": {} 00:06:05.074 } 00:06:05.074 ]' 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.074 [2024-10-01 01:23:44.912506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:05.074 [2024-10-01 01:23:44.912551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.074 [2024-10-01 01:23:44.912576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x172f2c0 00:06:05.074 [2024-10-01 01:23:44.912592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.074 [2024-10-01 01:23:44.914173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.074 [2024-10-01 01:23:44.914199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.074 Passthru0 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.074 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.074 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.332 { 00:06:05.332 "name": "Malloc0", 00:06:05.332 "aliases": [ 00:06:05.332 "2e9b9543-2272-4f8b-9631-1b8a3a388c01" 00:06:05.332 ], 00:06:05.332 "product_name": "Malloc disk", 00:06:05.332 "block_size": 512, 00:06:05.332 "num_blocks": 16384, 00:06:05.332 "uuid": "2e9b9543-2272-4f8b-9631-1b8a3a388c01", 00:06:05.332 "assigned_rate_limits": { 00:06:05.332 "rw_ios_per_sec": 0, 00:06:05.332 "rw_mbytes_per_sec": 0, 00:06:05.332 "r_mbytes_per_sec": 0, 00:06:05.332 "w_mbytes_per_sec": 0 00:06:05.332 }, 00:06:05.332 "claimed": true, 00:06:05.332 "claim_type": "exclusive_write", 00:06:05.332 "zoned": false, 00:06:05.332 "supported_io_types": { 00:06:05.332 "read": true, 00:06:05.332 "write": true, 00:06:05.332 "unmap": true, 00:06:05.332 "flush": 
true, 00:06:05.332 "reset": true, 00:06:05.332 "nvme_admin": false, 00:06:05.332 "nvme_io": false, 00:06:05.332 "nvme_io_md": false, 00:06:05.332 "write_zeroes": true, 00:06:05.332 "zcopy": true, 00:06:05.332 "get_zone_info": false, 00:06:05.332 "zone_management": false, 00:06:05.332 "zone_append": false, 00:06:05.332 "compare": false, 00:06:05.332 "compare_and_write": false, 00:06:05.332 "abort": true, 00:06:05.332 "seek_hole": false, 00:06:05.332 "seek_data": false, 00:06:05.332 "copy": true, 00:06:05.332 "nvme_iov_md": false 00:06:05.332 }, 00:06:05.332 "memory_domains": [ 00:06:05.332 { 00:06:05.332 "dma_device_id": "system", 00:06:05.332 "dma_device_type": 1 00:06:05.332 }, 00:06:05.332 { 00:06:05.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.332 "dma_device_type": 2 00:06:05.332 } 00:06:05.332 ], 00:06:05.332 "driver_specific": {} 00:06:05.332 }, 00:06:05.332 { 00:06:05.332 "name": "Passthru0", 00:06:05.332 "aliases": [ 00:06:05.332 "39d96d47-db99-5e1e-9cd8-cab85c575cbd" 00:06:05.332 ], 00:06:05.332 "product_name": "passthru", 00:06:05.332 "block_size": 512, 00:06:05.332 "num_blocks": 16384, 00:06:05.332 "uuid": "39d96d47-db99-5e1e-9cd8-cab85c575cbd", 00:06:05.332 "assigned_rate_limits": { 00:06:05.332 "rw_ios_per_sec": 0, 00:06:05.332 "rw_mbytes_per_sec": 0, 00:06:05.332 "r_mbytes_per_sec": 0, 00:06:05.332 "w_mbytes_per_sec": 0 00:06:05.332 }, 00:06:05.332 "claimed": false, 00:06:05.332 "zoned": false, 00:06:05.332 "supported_io_types": { 00:06:05.332 "read": true, 00:06:05.332 "write": true, 00:06:05.332 "unmap": true, 00:06:05.332 "flush": true, 00:06:05.332 "reset": true, 00:06:05.332 "nvme_admin": false, 00:06:05.332 "nvme_io": false, 00:06:05.332 "nvme_io_md": false, 00:06:05.332 "write_zeroes": true, 00:06:05.332 "zcopy": true, 00:06:05.332 "get_zone_info": false, 00:06:05.332 "zone_management": false, 00:06:05.332 "zone_append": false, 00:06:05.332 "compare": false, 00:06:05.332 "compare_and_write": false, 00:06:05.332 "abort": true, 00:06:05.332 "seek_hole": false, 00:06:05.332 "seek_data": false, 00:06:05.332 "copy": true, 00:06:05.332 "nvme_iov_md": false 00:06:05.332 }, 00:06:05.332 "memory_domains": [ 00:06:05.332 { 00:06:05.332 "dma_device_id": "system", 00:06:05.332 "dma_device_type": 1 00:06:05.332 }, 00:06:05.332 { 00:06:05.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.332 "dma_device_type": 2 00:06:05.332 } 00:06:05.332 ], 00:06:05.332 "driver_specific": { 00:06:05.332 "passthru": { 00:06:05.332 "name": "Passthru0", 00:06:05.332 "base_bdev_name": "Malloc0" 00:06:05.332 } 00:06:05.332 } 00:06:05.332 } 00:06:05.332 ]' 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.332 01:23:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.332 01:23:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.332 00:06:05.332 real 0m0.229s 00:06:05.332 user 0m0.150s 00:06:05.332 sys 0m0.023s 00:06:05.332 01:23:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.332 01:23:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 ************************************ 00:06:05.332 END TEST rpc_integrity 00:06:05.332 ************************************ 00:06:05.332 01:23:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.332 01:23:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.332 01:23:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.332 01:23:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 ************************************ 00:06:05.332 START TEST rpc_plugins 00:06:05.332 ************************************ 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:05.332 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.332 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.332 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.332 { 00:06:05.332 "name": "Malloc1", 00:06:05.332 "aliases": [ 00:06:05.332 "effe4e62-672b-4c97-a316-6fc85d115dbe" 00:06:05.332 ], 00:06:05.332 "product_name": "Malloc disk", 00:06:05.332 "block_size": 4096, 00:06:05.332 "num_blocks": 256, 00:06:05.332 "uuid": "effe4e62-672b-4c97-a316-6fc85d115dbe", 00:06:05.332 "assigned_rate_limits": { 00:06:05.332 "rw_ios_per_sec": 0, 00:06:05.332 "rw_mbytes_per_sec": 0, 00:06:05.332 "r_mbytes_per_sec": 0, 00:06:05.332 "w_mbytes_per_sec": 0 00:06:05.332 }, 00:06:05.332 "claimed": false, 00:06:05.332 "zoned": false, 00:06:05.332 "supported_io_types": { 00:06:05.332 "read": true, 00:06:05.332 "write": true, 00:06:05.332 "unmap": true, 00:06:05.332 "flush": true, 00:06:05.332 "reset": true, 00:06:05.332 "nvme_admin": false, 00:06:05.332 "nvme_io": false, 00:06:05.332 "nvme_io_md": false, 00:06:05.332 "write_zeroes": true, 00:06:05.332 "zcopy": true, 00:06:05.332 "get_zone_info": false, 00:06:05.332 "zone_management": false, 00:06:05.332 "zone_append": false, 00:06:05.332 "compare": false, 00:06:05.332 "compare_and_write": false, 00:06:05.332 "abort": true, 00:06:05.332 "seek_hole": false, 00:06:05.332 "seek_data": false, 00:06:05.332 "copy": true, 00:06:05.332 "nvme_iov_md": false 
00:06:05.333 }, 00:06:05.333 "memory_domains": [ 00:06:05.333 { 00:06:05.333 "dma_device_id": "system", 00:06:05.333 "dma_device_type": 1 00:06:05.333 }, 00:06:05.333 { 00:06:05.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.333 "dma_device_type": 2 00:06:05.333 } 00:06:05.333 ], 00:06:05.333 "driver_specific": {} 00:06:05.333 } 00:06:05.333 ]' 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.333 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.333 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:05.590 01:23:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:05.590 00:06:05.590 real 0m0.112s 00:06:05.590 user 0m0.075s 00:06:05.590 sys 0m0.011s 00:06:05.590 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.590 01:23:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.590 ************************************ 00:06:05.590 END TEST rpc_plugins 00:06:05.590 ************************************ 00:06:05.590 01:23:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:05.590 01:23:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.590 01:23:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.590 01:23:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.590 ************************************ 00:06:05.590 START TEST rpc_trace_cmd_test 00:06:05.591 ************************************ 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:05.591 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid761942", 00:06:05.591 "tpoint_group_mask": "0x8", 00:06:05.591 "iscsi_conn": { 00:06:05.591 "mask": "0x2", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "scsi": { 00:06:05.591 "mask": "0x4", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "bdev": { 00:06:05.591 "mask": "0x8", 00:06:05.591 "tpoint_mask": "0xffffffffffffffff" 00:06:05.591 }, 00:06:05.591 "nvmf_rdma": { 00:06:05.591 "mask": "0x10", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "nvmf_tcp": { 00:06:05.591 "mask": "0x20", 00:06:05.591 
"tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "ftl": { 00:06:05.591 "mask": "0x40", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "blobfs": { 00:06:05.591 "mask": "0x80", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "dsa": { 00:06:05.591 "mask": "0x200", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "thread": { 00:06:05.591 "mask": "0x400", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "nvme_pcie": { 00:06:05.591 "mask": "0x800", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "iaa": { 00:06:05.591 "mask": "0x1000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "nvme_tcp": { 00:06:05.591 "mask": "0x2000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "bdev_nvme": { 00:06:05.591 "mask": "0x4000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "sock": { 00:06:05.591 "mask": "0x8000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "blob": { 00:06:05.591 "mask": "0x10000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 }, 00:06:05.591 "bdev_raid": { 00:06:05.591 "mask": "0x20000", 00:06:05.591 "tpoint_mask": "0x0" 00:06:05.591 } 00:06:05.591 }' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.591 00:06:05.591 real 0m0.201s 00:06:05.591 user 0m0.177s 00:06:05.591 sys 0m0.014s 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.591 01:23:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.591 ************************************ 00:06:05.591 END TEST rpc_trace_cmd_test 00:06:05.591 ************************************ 00:06:05.849 01:23:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.849 01:23:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.849 01:23:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.849 01:23:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.849 01:23:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.849 01:23:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.849 ************************************ 00:06:05.849 START TEST rpc_daemon_integrity 00:06:05.849 ************************************ 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.849 { 00:06:05.849 "name": "Malloc2", 00:06:05.849 "aliases": [ 00:06:05.849 "f390b6c9-b4ab-4a65-ab8a-d948374459d7" 00:06:05.849 ], 00:06:05.849 "product_name": "Malloc disk", 00:06:05.849 "block_size": 512, 00:06:05.849 "num_blocks": 16384, 00:06:05.849 "uuid": "f390b6c9-b4ab-4a65-ab8a-d948374459d7", 00:06:05.849 "assigned_rate_limits": { 00:06:05.849 "rw_ios_per_sec": 0, 00:06:05.849 "rw_mbytes_per_sec": 0, 00:06:05.849 "r_mbytes_per_sec": 0, 00:06:05.849 "w_mbytes_per_sec": 0 00:06:05.849 }, 00:06:05.849 "claimed": false, 00:06:05.849 "zoned": false, 00:06:05.849 "supported_io_types": { 00:06:05.849 "read": true, 00:06:05.849 "write": true, 00:06:05.849 "unmap": true, 00:06:05.849 "flush": true, 00:06:05.849 "reset": true, 00:06:05.849 "nvme_admin": false, 00:06:05.849 "nvme_io": false, 00:06:05.849 "nvme_io_md": false, 00:06:05.849 "write_zeroes": true, 00:06:05.849 "zcopy": true, 00:06:05.849 "get_zone_info": false, 00:06:05.849 "zone_management": false, 00:06:05.849 "zone_append": false, 00:06:05.849 "compare": false, 00:06:05.849 "compare_and_write": false, 00:06:05.849 "abort": true, 00:06:05.849 "seek_hole": false, 00:06:05.849 "seek_data": false, 00:06:05.849 "copy": true, 00:06:05.849 "nvme_iov_md": false 00:06:05.849 }, 00:06:05.849 "memory_domains": [ 00:06:05.849 { 00:06:05.849 "dma_device_id": "system", 00:06:05.849 "dma_device_type": 1 00:06:05.849 }, 00:06:05.849 { 00:06:05.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.849 "dma_device_type": 2 00:06:05.849 } 00:06:05.849 ], 00:06:05.849 "driver_specific": {} 00:06:05.849 } 00:06:05.849 ]' 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.849 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 [2024-10-01 01:23:45.591027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.850 [2024-10-01 01:23:45.591087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.850 
[2024-10-01 01:23:45.591115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x172ef90 00:06:05.850 [2024-10-01 01:23:45.591131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.850 [2024-10-01 01:23:45.592529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.850 [2024-10-01 01:23:45.592557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.850 Passthru0 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.850 { 00:06:05.850 "name": "Malloc2", 00:06:05.850 "aliases": [ 00:06:05.850 "f390b6c9-b4ab-4a65-ab8a-d948374459d7" 00:06:05.850 ], 00:06:05.850 "product_name": "Malloc disk", 00:06:05.850 "block_size": 512, 00:06:05.850 "num_blocks": 16384, 00:06:05.850 "uuid": "f390b6c9-b4ab-4a65-ab8a-d948374459d7", 00:06:05.850 "assigned_rate_limits": { 00:06:05.850 "rw_ios_per_sec": 0, 00:06:05.850 "rw_mbytes_per_sec": 0, 00:06:05.850 "r_mbytes_per_sec": 0, 00:06:05.850 "w_mbytes_per_sec": 0 00:06:05.850 }, 00:06:05.850 "claimed": true, 00:06:05.850 "claim_type": "exclusive_write", 00:06:05.850 "zoned": false, 00:06:05.850 "supported_io_types": { 00:06:05.850 "read": true, 00:06:05.850 "write": true, 00:06:05.850 "unmap": true, 00:06:05.850 "flush": true, 00:06:05.850 "reset": true, 00:06:05.850 "nvme_admin": false, 00:06:05.850 "nvme_io": false, 00:06:05.850 "nvme_io_md": false, 00:06:05.850 "write_zeroes": true, 00:06:05.850 "zcopy": true, 00:06:05.850 "get_zone_info": false, 00:06:05.850 "zone_management": false, 00:06:05.850 "zone_append": false, 00:06:05.850 "compare": false, 00:06:05.850 "compare_and_write": false, 00:06:05.850 "abort": true, 00:06:05.850 "seek_hole": false, 00:06:05.850 "seek_data": false, 00:06:05.850 "copy": true, 00:06:05.850 "nvme_iov_md": false 00:06:05.850 }, 00:06:05.850 "memory_domains": [ 00:06:05.850 { 00:06:05.850 "dma_device_id": "system", 00:06:05.850 "dma_device_type": 1 00:06:05.850 }, 00:06:05.850 { 00:06:05.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.850 "dma_device_type": 2 00:06:05.850 } 00:06:05.850 ], 00:06:05.850 "driver_specific": {} 00:06:05.850 }, 00:06:05.850 { 00:06:05.850 "name": "Passthru0", 00:06:05.850 "aliases": [ 00:06:05.850 "520387d4-450f-569a-9032-ae37ac8b0026" 00:06:05.850 ], 00:06:05.850 "product_name": "passthru", 00:06:05.850 "block_size": 512, 00:06:05.850 "num_blocks": 16384, 00:06:05.850 "uuid": "520387d4-450f-569a-9032-ae37ac8b0026", 00:06:05.850 "assigned_rate_limits": { 00:06:05.850 "rw_ios_per_sec": 0, 00:06:05.850 "rw_mbytes_per_sec": 0, 00:06:05.850 "r_mbytes_per_sec": 0, 00:06:05.850 "w_mbytes_per_sec": 0 00:06:05.850 }, 00:06:05.850 "claimed": false, 00:06:05.850 "zoned": false, 00:06:05.850 "supported_io_types": { 00:06:05.850 "read": true, 00:06:05.850 "write": true, 00:06:05.850 "unmap": true, 00:06:05.850 "flush": true, 00:06:05.850 "reset": true, 00:06:05.850 "nvme_admin": false, 00:06:05.850 "nvme_io": false, 00:06:05.850 "nvme_io_md": false, 00:06:05.850 
"write_zeroes": true, 00:06:05.850 "zcopy": true, 00:06:05.850 "get_zone_info": false, 00:06:05.850 "zone_management": false, 00:06:05.850 "zone_append": false, 00:06:05.850 "compare": false, 00:06:05.850 "compare_and_write": false, 00:06:05.850 "abort": true, 00:06:05.850 "seek_hole": false, 00:06:05.850 "seek_data": false, 00:06:05.850 "copy": true, 00:06:05.850 "nvme_iov_md": false 00:06:05.850 }, 00:06:05.850 "memory_domains": [ 00:06:05.850 { 00:06:05.850 "dma_device_id": "system", 00:06:05.850 "dma_device_type": 1 00:06:05.850 }, 00:06:05.850 { 00:06:05.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.850 "dma_device_type": 2 00:06:05.850 } 00:06:05.850 ], 00:06:05.850 "driver_specific": { 00:06:05.850 "passthru": { 00:06:05.850 "name": "Passthru0", 00:06:05.850 "base_bdev_name": "Malloc2" 00:06:05.850 } 00:06:05.850 } 00:06:05.850 } 00:06:05.850 ]' 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.850 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.108 01:23:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.108 00:06:06.108 real 0m0.222s 00:06:06.108 user 0m0.151s 00:06:06.108 sys 0m0.020s 00:06:06.108 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.108 01:23:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.108 ************************************ 00:06:06.108 END TEST rpc_daemon_integrity 00:06:06.108 ************************************ 00:06:06.108 01:23:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.108 01:23:45 rpc -- rpc/rpc.sh@84 -- # killprocess 761942 00:06:06.108 01:23:45 rpc -- common/autotest_common.sh@950 -- # '[' -z 761942 ']' 00:06:06.108 01:23:45 rpc -- common/autotest_common.sh@954 -- # kill -0 761942 00:06:06.108 01:23:45 rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.108 01:23:45 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.108 01:23:45 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 761942 00:06:06.109 01:23:45 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.109 01:23:45 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.109 01:23:45 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 761942' 00:06:06.109 killing process with pid 761942 00:06:06.109 01:23:45 rpc -- common/autotest_common.sh@969 -- # kill 761942 00:06:06.109 01:23:45 rpc -- common/autotest_common.sh@974 -- # wait 761942 00:06:06.367 00:06:06.367 real 0m2.052s 00:06:06.367 user 0m2.545s 00:06:06.367 sys 0m0.624s 00:06:06.367 01:23:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.367 01:23:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.367 ************************************ 00:06:06.367 END TEST rpc 00:06:06.367 ************************************ 00:06:06.626 01:23:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.626 01:23:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.626 01:23:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.626 01:23:46 -- common/autotest_common.sh@10 -- # set +x 00:06:06.626 ************************************ 00:06:06.626 START TEST skip_rpc 00:06:06.626 ************************************ 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.626 * Looking for test storage... 00:06:06.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.626 01:23:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.626 --rc genhtml_branch_coverage=1 00:06:06.626 --rc genhtml_function_coverage=1 00:06:06.626 --rc genhtml_legend=1 00:06:06.626 --rc geninfo_all_blocks=1 00:06:06.626 --rc geninfo_unexecuted_blocks=1 00:06:06.626 00:06:06.626 ' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.626 --rc genhtml_branch_coverage=1 00:06:06.626 --rc genhtml_function_coverage=1 00:06:06.626 --rc genhtml_legend=1 00:06:06.626 --rc geninfo_all_blocks=1 00:06:06.626 --rc geninfo_unexecuted_blocks=1 00:06:06.626 00:06:06.626 ' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.626 --rc genhtml_branch_coverage=1 00:06:06.626 --rc genhtml_function_coverage=1 00:06:06.626 --rc genhtml_legend=1 00:06:06.626 --rc geninfo_all_blocks=1 00:06:06.626 --rc geninfo_unexecuted_blocks=1 00:06:06.626 00:06:06.626 ' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.626 --rc genhtml_branch_coverage=1 00:06:06.626 --rc genhtml_function_coverage=1 00:06:06.626 --rc genhtml_legend=1 00:06:06.626 --rc geninfo_all_blocks=1 00:06:06.626 --rc geninfo_unexecuted_blocks=1 00:06:06.626 00:06:06.626 ' 00:06:06.626 01:23:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.626 01:23:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.626 01:23:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.626 01:23:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.626 ************************************ 00:06:06.626 START TEST skip_rpc 00:06:06.626 ************************************ 00:06:06.626 01:23:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:06.626 
01:23:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=762386 00:06:06.626 01:23:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.626 01:23:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.626 01:23:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.626 [2024-10-01 01:23:46.476827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:06.626 [2024-10-01 01:23:46.476946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762386 ] 00:06:06.885 [2024-10-01 01:23:46.539934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.885 [2024-10-01 01:23:46.632458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 762386 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 762386 ']' 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 762386 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 762386 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 762386' 00:06:12.144 killing process with pid 762386 00:06:12.144 01:23:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 762386 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 762386 00:06:12.144 00:06:12.144 real 0m5.480s 00:06:12.144 user 0m5.143s 00:06:12.144 sys 0m0.346s 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.144 01:23:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.144 ************************************ 00:06:12.144 END TEST skip_rpc 00:06:12.144 ************************************ 00:06:12.144 01:23:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:12.144 01:23:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.144 01:23:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.144 01:23:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.144 ************************************ 00:06:12.144 START TEST skip_rpc_with_json 00:06:12.144 ************************************ 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=763020 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 763020 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 763020 ']' 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.144 01:23:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.402 [2024-10-01 01:23:52.005650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:12.402 [2024-10-01 01:23:52.005758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763020 ] 00:06:12.402 [2024-10-01 01:23:52.065611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.402 [2024-10-01 01:23:52.152685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.660 [2024-10-01 01:23:52.426393] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:12.660 request: 00:06:12.660 { 00:06:12.660 "trtype": "tcp", 00:06:12.660 "method": "nvmf_get_transports", 00:06:12.660 "req_id": 1 00:06:12.660 } 00:06:12.660 Got JSON-RPC error response 00:06:12.660 response: 00:06:12.660 { 00:06:12.660 "code": -19, 00:06:12.660 "message": "No such device" 00:06:12.660 } 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.660 [2024-10-01 01:23:52.434530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.660 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.918 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.918 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:12.918 { 00:06:12.918 "subsystems": [ 00:06:12.918 { 00:06:12.918 "subsystem": "fsdev", 00:06:12.918 "config": [ 00:06:12.918 { 00:06:12.918 "method": "fsdev_set_opts", 00:06:12.918 "params": { 00:06:12.918 "fsdev_io_pool_size": 65535, 00:06:12.918 "fsdev_io_cache_size": 256 00:06:12.918 } 00:06:12.918 } 00:06:12.918 ] 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "vfio_user_target", 00:06:12.918 "config": null 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "keyring", 00:06:12.918 "config": [] 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "iobuf", 00:06:12.918 "config": [ 00:06:12.918 { 00:06:12.918 "method": "iobuf_set_options", 00:06:12.918 "params": { 00:06:12.918 "small_pool_count": 8192, 00:06:12.918 "large_pool_count": 1024, 00:06:12.918 "small_bufsize": 8192, 00:06:12.918 "large_bufsize": 135168 00:06:12.918 } 00:06:12.918 } 00:06:12.918 ] 00:06:12.918 }, 00:06:12.918 { 
00:06:12.918 "subsystem": "sock", 00:06:12.918 "config": [ 00:06:12.918 { 00:06:12.918 "method": "sock_set_default_impl", 00:06:12.918 "params": { 00:06:12.918 "impl_name": "posix" 00:06:12.918 } 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "method": "sock_impl_set_options", 00:06:12.918 "params": { 00:06:12.918 "impl_name": "ssl", 00:06:12.918 "recv_buf_size": 4096, 00:06:12.918 "send_buf_size": 4096, 00:06:12.918 "enable_recv_pipe": true, 00:06:12.918 "enable_quickack": false, 00:06:12.918 "enable_placement_id": 0, 00:06:12.918 "enable_zerocopy_send_server": true, 00:06:12.918 "enable_zerocopy_send_client": false, 00:06:12.918 "zerocopy_threshold": 0, 00:06:12.918 "tls_version": 0, 00:06:12.918 "enable_ktls": false 00:06:12.918 } 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "method": "sock_impl_set_options", 00:06:12.918 "params": { 00:06:12.918 "impl_name": "posix", 00:06:12.918 "recv_buf_size": 2097152, 00:06:12.918 "send_buf_size": 2097152, 00:06:12.918 "enable_recv_pipe": true, 00:06:12.918 "enable_quickack": false, 00:06:12.918 "enable_placement_id": 0, 00:06:12.918 "enable_zerocopy_send_server": true, 00:06:12.918 "enable_zerocopy_send_client": false, 00:06:12.918 "zerocopy_threshold": 0, 00:06:12.918 "tls_version": 0, 00:06:12.918 "enable_ktls": false 00:06:12.918 } 00:06:12.918 } 00:06:12.918 ] 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "vmd", 00:06:12.918 "config": [] 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "accel", 00:06:12.918 "config": [ 00:06:12.918 { 00:06:12.918 "method": "accel_set_options", 00:06:12.918 "params": { 00:06:12.918 "small_cache_size": 128, 00:06:12.918 "large_cache_size": 16, 00:06:12.918 "task_count": 2048, 00:06:12.918 "sequence_count": 2048, 00:06:12.918 "buf_count": 2048 00:06:12.918 } 00:06:12.918 } 00:06:12.918 ] 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "subsystem": "bdev", 00:06:12.918 "config": [ 00:06:12.918 { 00:06:12.918 "method": "bdev_set_options", 00:06:12.918 "params": { 00:06:12.918 "bdev_io_pool_size": 65535, 00:06:12.918 "bdev_io_cache_size": 256, 00:06:12.918 "bdev_auto_examine": true, 00:06:12.918 "iobuf_small_cache_size": 128, 00:06:12.918 "iobuf_large_cache_size": 16 00:06:12.918 } 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "method": "bdev_raid_set_options", 00:06:12.918 "params": { 00:06:12.918 "process_window_size_kb": 1024, 00:06:12.918 "process_max_bandwidth_mb_sec": 0 00:06:12.918 } 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "method": "bdev_iscsi_set_options", 00:06:12.918 "params": { 00:06:12.918 "timeout_sec": 30 00:06:12.918 } 00:06:12.918 }, 00:06:12.918 { 00:06:12.918 "method": "bdev_nvme_set_options", 00:06:12.918 "params": { 00:06:12.918 "action_on_timeout": "none", 00:06:12.918 "timeout_us": 0, 00:06:12.918 "timeout_admin_us": 0, 00:06:12.918 "keep_alive_timeout_ms": 10000, 00:06:12.918 "arbitration_burst": 0, 00:06:12.918 "low_priority_weight": 0, 00:06:12.918 "medium_priority_weight": 0, 00:06:12.918 "high_priority_weight": 0, 00:06:12.918 "nvme_adminq_poll_period_us": 10000, 00:06:12.918 "nvme_ioq_poll_period_us": 0, 00:06:12.918 "io_queue_requests": 0, 00:06:12.918 "delay_cmd_submit": true, 00:06:12.918 "transport_retry_count": 4, 00:06:12.918 "bdev_retry_count": 3, 00:06:12.918 "transport_ack_timeout": 0, 00:06:12.919 "ctrlr_loss_timeout_sec": 0, 00:06:12.919 "reconnect_delay_sec": 0, 00:06:12.919 "fast_io_fail_timeout_sec": 0, 00:06:12.919 "disable_auto_failback": false, 00:06:12.919 "generate_uuids": false, 00:06:12.919 "transport_tos": 0, 00:06:12.919 "nvme_error_stat": false, 
00:06:12.919 "rdma_srq_size": 0, 00:06:12.919 "io_path_stat": false, 00:06:12.919 "allow_accel_sequence": false, 00:06:12.919 "rdma_max_cq_size": 0, 00:06:12.919 "rdma_cm_event_timeout_ms": 0, 00:06:12.919 "dhchap_digests": [ 00:06:12.919 "sha256", 00:06:12.919 "sha384", 00:06:12.919 "sha512" 00:06:12.919 ], 00:06:12.919 "dhchap_dhgroups": [ 00:06:12.919 "null", 00:06:12.919 "ffdhe2048", 00:06:12.919 "ffdhe3072", 00:06:12.919 "ffdhe4096", 00:06:12.919 "ffdhe6144", 00:06:12.919 "ffdhe8192" 00:06:12.919 ] 00:06:12.919 } 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "method": "bdev_nvme_set_hotplug", 00:06:12.919 "params": { 00:06:12.919 "period_us": 100000, 00:06:12.919 "enable": false 00:06:12.919 } 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "method": "bdev_wait_for_examine" 00:06:12.919 } 00:06:12.919 ] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "scsi", 00:06:12.919 "config": null 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "scheduler", 00:06:12.919 "config": [ 00:06:12.919 { 00:06:12.919 "method": "framework_set_scheduler", 00:06:12.919 "params": { 00:06:12.919 "name": "static" 00:06:12.919 } 00:06:12.919 } 00:06:12.919 ] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "vhost_scsi", 00:06:12.919 "config": [] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "vhost_blk", 00:06:12.919 "config": [] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "ublk", 00:06:12.919 "config": [] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "nbd", 00:06:12.919 "config": [] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "nvmf", 00:06:12.919 "config": [ 00:06:12.919 { 00:06:12.919 "method": "nvmf_set_config", 00:06:12.919 "params": { 00:06:12.919 "discovery_filter": "match_any", 00:06:12.919 "admin_cmd_passthru": { 00:06:12.919 "identify_ctrlr": false 00:06:12.919 }, 00:06:12.919 "dhchap_digests": [ 00:06:12.919 "sha256", 00:06:12.919 "sha384", 00:06:12.919 "sha512" 00:06:12.919 ], 00:06:12.919 "dhchap_dhgroups": [ 00:06:12.919 "null", 00:06:12.919 "ffdhe2048", 00:06:12.919 "ffdhe3072", 00:06:12.919 "ffdhe4096", 00:06:12.919 "ffdhe6144", 00:06:12.919 "ffdhe8192" 00:06:12.919 ] 00:06:12.919 } 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "method": "nvmf_set_max_subsystems", 00:06:12.919 "params": { 00:06:12.919 "max_subsystems": 1024 00:06:12.919 } 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "method": "nvmf_set_crdt", 00:06:12.919 "params": { 00:06:12.919 "crdt1": 0, 00:06:12.919 "crdt2": 0, 00:06:12.919 "crdt3": 0 00:06:12.919 } 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "method": "nvmf_create_transport", 00:06:12.919 "params": { 00:06:12.919 "trtype": "TCP", 00:06:12.919 "max_queue_depth": 128, 00:06:12.919 "max_io_qpairs_per_ctrlr": 127, 00:06:12.919 "in_capsule_data_size": 4096, 00:06:12.919 "max_io_size": 131072, 00:06:12.919 "io_unit_size": 131072, 00:06:12.919 "max_aq_depth": 128, 00:06:12.919 "num_shared_buffers": 511, 00:06:12.919 "buf_cache_size": 4294967295, 00:06:12.919 "dif_insert_or_strip": false, 00:06:12.919 "zcopy": false, 00:06:12.919 "c2h_success": true, 00:06:12.919 "sock_priority": 0, 00:06:12.919 "abort_timeout_sec": 1, 00:06:12.919 "ack_timeout": 0, 00:06:12.919 "data_wr_pool_size": 0 00:06:12.919 } 00:06:12.919 } 00:06:12.919 ] 00:06:12.919 }, 00:06:12.919 { 00:06:12.919 "subsystem": "iscsi", 00:06:12.919 "config": [ 00:06:12.919 { 00:06:12.919 "method": "iscsi_set_options", 00:06:12.919 "params": { 00:06:12.919 "node_base": "iqn.2016-06.io.spdk", 00:06:12.919 "max_sessions": 128, 00:06:12.919 
"max_connections_per_session": 2, 00:06:12.919 "max_queue_depth": 64, 00:06:12.919 "default_time2wait": 2, 00:06:12.919 "default_time2retain": 20, 00:06:12.919 "first_burst_length": 8192, 00:06:12.919 "immediate_data": true, 00:06:12.919 "allow_duplicated_isid": false, 00:06:12.919 "error_recovery_level": 0, 00:06:12.919 "nop_timeout": 60, 00:06:12.919 "nop_in_interval": 30, 00:06:12.919 "disable_chap": false, 00:06:12.919 "require_chap": false, 00:06:12.919 "mutual_chap": false, 00:06:12.919 "chap_group": 0, 00:06:12.919 "max_large_datain_per_connection": 64, 00:06:12.919 "max_r2t_per_connection": 4, 00:06:12.919 "pdu_pool_size": 36864, 00:06:12.919 "immediate_data_pool_size": 16384, 00:06:12.919 "data_out_pool_size": 2048 00:06:12.919 } 00:06:12.919 } 00:06:12.919 ] 00:06:12.919 } 00:06:12.919 ] 00:06:12.919 } 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 763020 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 763020 ']' 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 763020 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 763020 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 763020' 00:06:12.919 killing process with pid 763020 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 763020 00:06:12.919 01:23:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 763020 00:06:13.486 01:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=763157 00:06:13.486 01:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:13.486 01:23:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 763157 ']' 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 763157' 00:06:18.758 killing process with pid 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 763157 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.758 00:06:18.758 real 0m6.628s 00:06:18.758 user 0m6.240s 00:06:18.758 sys 0m0.735s 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.758 01:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 ************************************ 00:06:18.758 END TEST skip_rpc_with_json 00:06:18.758 ************************************ 00:06:18.758 01:23:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:18.758 01:23:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.758 01:23:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.758 01:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.018 ************************************ 00:06:19.018 START TEST skip_rpc_with_delay 00:06:19.018 ************************************ 00:06:19.018 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:19.018 01:23:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.019 [2024-10-01 01:23:58.689826] app.c: 
840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:19.019 [2024-10-01 01:23:58.689948] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.019 00:06:19.019 real 0m0.076s 00:06:19.019 user 0m0.049s 00:06:19.019 sys 0m0.026s 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.019 01:23:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.019 ************************************ 00:06:19.019 END TEST skip_rpc_with_delay 00:06:19.019 ************************************ 00:06:19.019 01:23:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.019 01:23:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.019 01:23:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.019 01:23:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.019 01:23:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.019 01:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.019 ************************************ 00:06:19.019 START TEST exit_on_failed_rpc_init 00:06:19.019 ************************************ 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=763895 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 763895 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 763895 ']' 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.019 01:23:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.019 [2024-10-01 01:23:58.809120] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:19.019 [2024-10-01 01:23:58.809224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763895 ] 00:06:19.019 [2024-10-01 01:23:58.870142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.333 [2024-10-01 01:23:58.960497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.629 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.629 [2024-10-01 01:23:59.284760] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:19.629 [2024-10-01 01:23:59.284860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763947 ] 00:06:19.629 [2024-10-01 01:23:59.349418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.629 [2024-10-01 01:23:59.443735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.629 [2024-10-01 01:23:59.443893] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
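Note: the "socket ... in use. Specify another." error above is the intended failure path of test_exit_on_failed_rpc_init — the second spdk_tgt instance (-m 0x2) is deliberately pointed at the same default RPC socket as the first. Purely as an illustrative sketch (not part of this test), two targets could coexist by giving each its own RPC socket with -r, a flag used later in this same log; the second socket name below is made up:

# hypothetical side-by-side launch with distinct RPC sockets (illustration only)
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &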
00:06:19.629 [2024-10-01 01:23:59.443922] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.629 [2024-10-01 01:23:59.443937] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 763895 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 763895 ']' 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 763895 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 763895 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 763895' 00:06:19.888 killing process with pid 763895 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 763895 00:06:19.888 01:23:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 763895 00:06:20.455 00:06:20.455 real 0m1.253s 00:06:20.455 user 0m1.388s 00:06:20.455 sys 0m0.455s 00:06:20.455 01:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.455 01:24:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.455 ************************************ 00:06:20.455 END TEST exit_on_failed_rpc_init 00:06:20.455 ************************************ 00:06:20.455 01:24:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.455 00:06:20.455 real 0m13.776s 00:06:20.455 user 0m12.995s 00:06:20.455 sys 0m1.747s 00:06:20.455 01:24:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.455 01:24:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.455 ************************************ 00:06:20.455 END TEST skip_rpc 00:06:20.455 ************************************ 00:06:20.455 01:24:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.455 01:24:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.455 01:24:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.455 01:24:00 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.455 ************************************ 00:06:20.455 START TEST rpc_client 00:06:20.455 ************************************ 00:06:20.455 01:24:00 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.455 * Looking for test storage... 00:06:20.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.455 01:24:00 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.455 01:24:00 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.455 01:24:00 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.455 01:24:00 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:20.455 01:24:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.456 01:24:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.456 --rc genhtml_branch_coverage=1 00:06:20.456 --rc genhtml_function_coverage=1 00:06:20.456 --rc genhtml_legend=1 00:06:20.456 --rc geninfo_all_blocks=1 00:06:20.456 --rc geninfo_unexecuted_blocks=1 00:06:20.456 00:06:20.456 ' 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.456 --rc genhtml_branch_coverage=1 00:06:20.456 --rc genhtml_function_coverage=1 00:06:20.456 --rc genhtml_legend=1 00:06:20.456 --rc geninfo_all_blocks=1 00:06:20.456 --rc geninfo_unexecuted_blocks=1 00:06:20.456 00:06:20.456 ' 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.456 --rc genhtml_branch_coverage=1 00:06:20.456 --rc genhtml_function_coverage=1 00:06:20.456 --rc genhtml_legend=1 00:06:20.456 --rc geninfo_all_blocks=1 00:06:20.456 --rc geninfo_unexecuted_blocks=1 00:06:20.456 00:06:20.456 ' 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.456 --rc genhtml_branch_coverage=1 00:06:20.456 --rc genhtml_function_coverage=1 00:06:20.456 --rc genhtml_legend=1 00:06:20.456 --rc geninfo_all_blocks=1 00:06:20.456 --rc geninfo_unexecuted_blocks=1 00:06:20.456 00:06:20.456 ' 00:06:20.456 01:24:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.456 OK 00:06:20.456 01:24:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.456 00:06:20.456 real 0m0.152s 00:06:20.456 user 0m0.112s 00:06:20.456 sys 0m0.048s 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.456 01:24:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.456 ************************************ 00:06:20.456 END TEST rpc_client 00:06:20.456 ************************************ 00:06:20.456 01:24:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:06:20.456 01:24:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.456 01:24:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.456 01:24:00 -- common/autotest_common.sh@10 -- # set +x 00:06:20.456 ************************************ 00:06:20.456 START TEST json_config 00:06:20.456 ************************************ 00:06:20.456 01:24:00 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.456 01:24:00 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.456 01:24:00 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.456 01:24:00 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.714 01:24:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.714 01:24:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.714 01:24:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.714 01:24:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.714 01:24:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.714 01:24:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:20.714 01:24:00 json_config -- scripts/common.sh@345 -- # : 1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.714 01:24:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.714 01:24:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@353 -- # local d=1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.714 01:24:00 json_config -- scripts/common.sh@355 -- # echo 1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.714 01:24:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@353 -- # local d=2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.714 01:24:00 json_config -- scripts/common.sh@355 -- # echo 2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.714 01:24:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.714 01:24:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.714 01:24:00 json_config -- scripts/common.sh@368 -- # return 0 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.714 --rc genhtml_branch_coverage=1 00:06:20.714 --rc genhtml_function_coverage=1 00:06:20.714 --rc genhtml_legend=1 00:06:20.714 --rc geninfo_all_blocks=1 00:06:20.714 --rc geninfo_unexecuted_blocks=1 00:06:20.714 00:06:20.714 ' 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.714 --rc genhtml_branch_coverage=1 00:06:20.714 --rc genhtml_function_coverage=1 00:06:20.714 --rc genhtml_legend=1 00:06:20.714 --rc geninfo_all_blocks=1 00:06:20.714 --rc geninfo_unexecuted_blocks=1 00:06:20.714 00:06:20.714 ' 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.714 --rc genhtml_branch_coverage=1 00:06:20.714 --rc genhtml_function_coverage=1 00:06:20.714 --rc genhtml_legend=1 00:06:20.714 --rc geninfo_all_blocks=1 00:06:20.714 --rc geninfo_unexecuted_blocks=1 00:06:20.714 00:06:20.714 ' 00:06:20.714 01:24:00 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.714 --rc genhtml_branch_coverage=1 00:06:20.714 --rc genhtml_function_coverage=1 00:06:20.714 --rc genhtml_legend=1 00:06:20.714 --rc geninfo_all_blocks=1 00:06:20.714 --rc geninfo_unexecuted_blocks=1 00:06:20.714 00:06:20.714 ' 00:06:20.714 01:24:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.714 01:24:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:20.715 01:24:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.715 01:24:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.715 01:24:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.715 01:24:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.715 01:24:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.715 01:24:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.715 01:24:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.715 01:24:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.715 01:24:00 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.715 01:24:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@51 -- # : 0 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
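For context, the NVME_HOSTNQN/NVME_HOSTID values captured above are the host-side identity that the nvmf helpers presumably hand to nvme-cli; the json_config test itself only builds the target and never connects. Purely as an illustration (standard nvme-cli options, not taken from this log), a host would reach the subsystem created later in this run roughly like:

# illustrative only; assumes nvme-cli and the 127.0.0.1:4420 TCP listener set up later in this log
nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"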
00:06:20.715 01:24:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.715 01:24:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:20.715 INFO: JSON configuration test init 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.715 01:24:00 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.715 01:24:00 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:20.715 01:24:00 json_config -- json_config/common.sh@10 -- # shift 00:06:20.715 01:24:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.715 01:24:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.715 01:24:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.715 01:24:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.715 01:24:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.715 01:24:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=764244 00:06:20.715 01:24:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.715 01:24:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.715 Waiting for target to run... 00:06:20.715 01:24:00 json_config -- json_config/common.sh@25 -- # waitforlisten 764244 /var/tmp/spdk_tgt.sock 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 764244 ']' 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.715 01:24:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.715 [2024-10-01 01:24:00.472709] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:20.715 [2024-10-01 01:24:00.472800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764244 ] 00:06:21.281 [2024-10-01 01:24:00.986976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.281 [2024-10-01 01:24:01.063805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:21.846 01:24:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.846 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.846 01:24:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.846 01:24:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:21.846 01:24:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:25.129 01:24:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.129 01:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:25.129 01:24:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:25.129 01:24:04 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@54 -- # sort 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:25.129 01:24:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:25.129 01:24:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.129 01:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:25.388 01:24:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:25.388 01:24:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.388 01:24:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.388 01:24:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:25.388 01:24:05 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:25.388 01:24:05 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:25.388 01:24:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.388 01:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.646 MallocForNvmf0 00:06:25.646 01:24:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.646 01:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.905 MallocForNvmf1 00:06:25.905 01:24:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.905 01:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:26.161 [2024-10-01 01:24:05.794135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.161 01:24:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.161 01:24:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.419 01:24:06 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.419 01:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.675 01:24:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.675 01:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.931 01:24:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.931 01:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:27.189 [2024-10-01 01:24:06.885827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:27.189 01:24:06 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:27.189 01:24:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.189 01:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.189 01:24:06 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:27.189 01:24:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.189 01:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.189 01:24:06 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:27.189 01:24:06 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.189 01:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.447 MallocBdevForConfigChangeCheck 00:06:27.447 01:24:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:27.447 01:24:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.447 01:24:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.447 01:24:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:27.447 01:24:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.013 01:24:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:28.013 INFO: shutting down applications... 
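Condensed from the rpc.py calls traced above, the NVMe-oF/TCP target built by create_nvmf_subsystem_config amounts to roughly the following sequence (a sketch; arguments copied from this log, script paths abbreviated):

# sketch of the target-side RPC sequence shown above
RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420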
00:06:28.013 01:24:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:28.013 01:24:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:28.013 01:24:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:28.013 01:24:07 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:29.912 Calling clear_iscsi_subsystem 00:06:29.912 Calling clear_nvmf_subsystem 00:06:29.912 Calling clear_nbd_subsystem 00:06:29.912 Calling clear_ublk_subsystem 00:06:29.912 Calling clear_vhost_blk_subsystem 00:06:29.912 Calling clear_vhost_scsi_subsystem 00:06:29.912 Calling clear_bdev_subsystem 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@352 -- # break 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:29.912 01:24:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:29.912 01:24:09 json_config -- json_config/common.sh@31 -- # local app=target 00:06:29.912 01:24:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:29.912 01:24:09 json_config -- json_config/common.sh@35 -- # [[ -n 764244 ]] 00:06:29.912 01:24:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 764244 00:06:29.912 01:24:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:29.912 01:24:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.912 01:24:09 json_config -- json_config/common.sh@41 -- # kill -0 764244 00:06:29.912 01:24:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.479 01:24:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.479 01:24:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.479 01:24:10 json_config -- json_config/common.sh@41 -- # kill -0 764244 00:06:30.479 01:24:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.479 01:24:10 json_config -- json_config/common.sh@43 -- # break 00:06:30.479 01:24:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.479 01:24:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.479 SPDK target shutdown done 00:06:30.479 01:24:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:30.479 INFO: relaunching applications... 
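The relaunch step that follows is essentially "persist the live configuration, then boot a fresh target from the saved file"; a minimal sketch using only commands and flags that appear in this log (paths abbreviated, redirection approximated):

# sketch: save the running configuration and restart the target from it
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json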
00:06:30.479 01:24:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.479 01:24:10 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.479 01:24:10 json_config -- json_config/common.sh@10 -- # shift 00:06:30.479 01:24:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.479 01:24:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.479 01:24:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.479 01:24:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.479 01:24:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.479 01:24:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=766143 00:06:30.479 01:24:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.479 01:24:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.479 Waiting for target to run... 00:06:30.479 01:24:10 json_config -- json_config/common.sh@25 -- # waitforlisten 766143 /var/tmp/spdk_tgt.sock 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 766143 ']' 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.479 01:24:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.479 [2024-10-01 01:24:10.265903] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:30.479 [2024-10-01 01:24:10.266013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766143 ] 00:06:31.046 [2024-10-01 01:24:10.822111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.305 [2024-10-01 01:24:10.904556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.586 [2024-10-01 01:24:13.958586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.586 [2024-10-01 01:24:13.991099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:35.151 01:24:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.151 01:24:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:35.151 01:24:14 json_config -- json_config/common.sh@26 -- # echo '' 00:06:35.151 00:06:35.151 01:24:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:35.151 01:24:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:35.151 INFO: Checking if target configuration is the same... 
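The "same configuration" check traced below works by normalizing both JSON documents and diffing them. A sketch of the idea (the temp-file names here are invented; json_diff.sh itself uses mktemp and /dev/fd redirection, as its trace shows):

# sketch: sort both configs with config_filter.py, then compare
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_sorted.json
diff -u /tmp/live_sorted.json /tmp/saved_sorted.json && echo 'INFO: JSON config files are the same'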
00:06:35.151 01:24:14 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.151 01:24:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:35.151 01:24:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.151 + '[' 2 -ne 2 ']' 00:06:35.151 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:35.151 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:35.151 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.151 +++ basename /dev/fd/62 00:06:35.151 ++ mktemp /tmp/62.XXX 00:06:35.151 + tmp_file_1=/tmp/62.f62 00:06:35.151 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.151 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.151 + tmp_file_2=/tmp/spdk_tgt_config.json.DDv 00:06:35.151 + ret=0 00:06:35.151 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.407 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:35.407 + diff -u /tmp/62.f62 /tmp/spdk_tgt_config.json.DDv 00:06:35.407 + echo 'INFO: JSON config files are the same' 00:06:35.407 INFO: JSON config files are the same 00:06:35.407 + rm /tmp/62.f62 /tmp/spdk_tgt_config.json.DDv 00:06:35.407 + exit 0 00:06:35.407 01:24:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:35.407 01:24:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:35.407 INFO: changing configuration and checking if this can be detected... 00:06:35.407 01:24:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:35.407 01:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:35.973 01:24:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.973 01:24:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:35.973 01:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:35.973 + '[' 2 -ne 2 ']' 00:06:35.973 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:35.973 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:35.973 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.973 +++ basename /dev/fd/62 00:06:35.974 ++ mktemp /tmp/62.XXX 00:06:35.974 + tmp_file_1=/tmp/62.xvj 00:06:35.974 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.974 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.974 + tmp_file_2=/tmp/spdk_tgt_config.json.yED 00:06:35.974 + ret=0 00:06:35.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.232 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.232 + diff -u /tmp/62.xvj /tmp/spdk_tgt_config.json.yED 00:06:36.232 + ret=1 00:06:36.232 + echo '=== Start of file: /tmp/62.xvj ===' 00:06:36.232 + cat /tmp/62.xvj 00:06:36.232 + echo '=== End of file: /tmp/62.xvj ===' 00:06:36.232 + echo '' 00:06:36.232 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yED ===' 00:06:36.232 + cat /tmp/spdk_tgt_config.json.yED 00:06:36.232 + echo '=== End of file: /tmp/spdk_tgt_config.json.yED ===' 00:06:36.232 + echo '' 00:06:36.232 + rm /tmp/62.xvj /tmp/spdk_tgt_config.json.yED 00:06:36.232 + exit 1 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:36.232 INFO: configuration change detected. 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 766143 ]] 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.232 01:24:16 json_config -- json_config/json_config.sh@330 -- # killprocess 766143 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@950 -- # '[' -z 766143 ']' 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@954 -- # kill -0 766143 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@955 -- # uname 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.232 01:24:16 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 766143 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 766143' 00:06:36.232 killing process with pid 766143 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@969 -- # kill 766143 00:06:36.232 01:24:16 json_config -- common/autotest_common.sh@974 -- # wait 766143 00:06:38.133 01:24:17 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.133 01:24:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:38.133 01:24:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.133 01:24:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.133 01:24:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:38.133 01:24:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:38.133 INFO: Success 00:06:38.133 00:06:38.133 real 0m17.486s 00:06:38.133 user 0m19.792s 00:06:38.133 sys 0m2.274s 00:06:38.133 01:24:17 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.133 01:24:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.133 ************************************ 00:06:38.133 END TEST json_config 00:06:38.133 ************************************ 00:06:38.133 01:24:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:38.133 01:24:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.133 01:24:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.133 01:24:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.133 ************************************ 00:06:38.133 START TEST json_config_extra_key 00:06:38.133 ************************************ 00:06:38.133 01:24:17 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:38.133 01:24:17 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.133 01:24:17 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.133 01:24:17 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.133 01:24:17 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.133 01:24:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.133 01:24:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.133 01:24:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.134 01:24:17 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 01:24:17 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.134 01:24:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.134 01:24:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.134 01:24:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.134 01:24:17 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.134 01:24:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:38.134 01:24:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.134 01:24:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:38.134 INFO: launching applications... 
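
The entries that follow launch the SPDK target with the extra-key JSON config and then wait for its RPC socket to come up. A small sketch of that launch-and-wait pattern, with paths taken from the trace and a simple polling loop standing in for the real waitforlisten helper:

    SOCK=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" \
        --json ./test/json_config/extra_key.json &
    pid=$!

    # Poll the RPC socket until the target answers (waitforlisten does this more carefully).
    for _ in $(seq 1 30); do
        if ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "target listening on $SOCK (pid $pid)"
            break
        fi
        sleep 0.5
    done
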
00:06:38.134 01:24:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=767199 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.134 Waiting for target to run... 00:06:38.134 01:24:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 767199 /var/tmp/spdk_tgt.sock 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 767199 ']' 00:06:38.134 01:24:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.135 01:24:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.135 01:24:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.135 01:24:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.135 01:24:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:38.393 [2024-10-01 01:24:18.004396] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:38.393 [2024-10-01 01:24:18.004491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767199 ] 00:06:38.651 [2024-10-01 01:24:18.335910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.651 [2024-10-01 01:24:18.399368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.218 01:24:18 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.218 01:24:18 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:39.218 00:06:39.218 01:24:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:39.218 INFO: shutting down applications... 
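
The shutdown that follows is a plain SIGINT-and-poll loop: send SIGINT to the target, then check with kill -0 every half second for up to 30 tries until the process is gone. A sketch of the same pattern:

    pid=767199                      # pid from the trace above; illustrative only
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
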
00:06:39.218 01:24:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 767199 ]] 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 767199 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 767199 00:06:39.218 01:24:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 767199 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:39.783 01:24:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:39.783 SPDK target shutdown done 00:06:39.783 01:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:39.783 Success 00:06:39.783 00:06:39.783 real 0m1.673s 00:06:39.783 user 0m1.673s 00:06:39.783 sys 0m0.459s 00:06:39.783 01:24:19 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.783 01:24:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:39.783 ************************************ 00:06:39.783 END TEST json_config_extra_key 00:06:39.783 ************************************ 00:06:39.783 01:24:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:39.783 01:24:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.783 01:24:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.783 01:24:19 -- common/autotest_common.sh@10 -- # set +x 00:06:39.783 ************************************ 00:06:39.783 START TEST alias_rpc 00:06:39.783 ************************************ 00:06:39.783 01:24:19 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:39.783 * Looking for test storage... 
00:06:39.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:39.783 01:24:19 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.783 01:24:19 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.783 01:24:19 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.042 01:24:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:40.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.042 --rc genhtml_branch_coverage=1 00:06:40.042 --rc genhtml_function_coverage=1 00:06:40.042 --rc genhtml_legend=1 00:06:40.042 --rc geninfo_all_blocks=1 00:06:40.042 --rc geninfo_unexecuted_blocks=1 00:06:40.042 00:06:40.042 ' 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:40.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.042 --rc genhtml_branch_coverage=1 00:06:40.042 --rc genhtml_function_coverage=1 00:06:40.042 --rc genhtml_legend=1 00:06:40.042 --rc geninfo_all_blocks=1 00:06:40.042 --rc geninfo_unexecuted_blocks=1 00:06:40.042 00:06:40.042 ' 00:06:40.042 01:24:19 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:40.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.042 --rc genhtml_branch_coverage=1 00:06:40.042 --rc genhtml_function_coverage=1 00:06:40.042 --rc genhtml_legend=1 00:06:40.042 --rc geninfo_all_blocks=1 00:06:40.042 --rc geninfo_unexecuted_blocks=1 00:06:40.042 00:06:40.042 ' 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:40.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.042 --rc genhtml_branch_coverage=1 00:06:40.042 --rc genhtml_function_coverage=1 00:06:40.042 --rc genhtml_legend=1 00:06:40.042 --rc geninfo_all_blocks=1 00:06:40.042 --rc geninfo_unexecuted_blocks=1 00:06:40.042 00:06:40.042 ' 00:06:40.042 01:24:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.042 01:24:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=767394 00:06:40.042 01:24:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.042 01:24:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 767394 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 767394 ']' 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.042 01:24:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.042 [2024-10-01 01:24:19.734186] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
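
Once the target below is up, the test drives it with a single RPC, load_config -i, visible later in the trace; the -i flag appears to make load_config also register the deprecated RPC aliases, which is what this test exercises. A hedged sketch of feeding a previously saved configuration back to a running target (socket path and temp file are assumptions):

    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/alias_config.json
    ./scripts/rpc.py -s /var/tmp/spdk.sock load_config -i < /tmp/alias_config.json
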
00:06:40.042 [2024-10-01 01:24:19.734273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767394 ] 00:06:40.042 [2024-10-01 01:24:19.794104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.042 [2024-10-01 01:24:19.882131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.609 01:24:20 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.609 01:24:20 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.609 01:24:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:40.867 01:24:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 767394 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 767394 ']' 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 767394 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 767394 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.867 01:24:20 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.868 01:24:20 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 767394' 00:06:40.868 killing process with pid 767394 00:06:40.868 01:24:20 alias_rpc -- common/autotest_common.sh@969 -- # kill 767394 00:06:40.868 01:24:20 alias_rpc -- common/autotest_common.sh@974 -- # wait 767394 00:06:41.126 00:06:41.126 real 0m1.400s 00:06:41.126 user 0m1.479s 00:06:41.126 sys 0m0.490s 00:06:41.126 01:24:20 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.126 01:24:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.126 ************************************ 00:06:41.126 END TEST alias_rpc 00:06:41.126 ************************************ 00:06:41.126 01:24:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:41.126 01:24:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.126 01:24:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.126 01:24:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.126 01:24:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.384 ************************************ 00:06:41.384 START TEST spdkcli_tcp 00:06:41.384 ************************************ 00:06:41.384 01:24:20 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.384 * Looking for test storage... 
00:06:41.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.384 01:24:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.384 --rc genhtml_branch_coverage=1 00:06:41.384 --rc genhtml_function_coverage=1 00:06:41.384 --rc genhtml_legend=1 00:06:41.384 --rc geninfo_all_blocks=1 00:06:41.384 --rc geninfo_unexecuted_blocks=1 00:06:41.384 00:06:41.384 ' 00:06:41.384 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc 
geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.385 --rc genhtml_branch_coverage=1 00:06:41.385 --rc genhtml_function_coverage=1 00:06:41.385 --rc genhtml_legend=1 00:06:41.385 --rc geninfo_all_blocks=1 00:06:41.385 --rc geninfo_unexecuted_blocks=1 00:06:41.385 00:06:41.385 ' 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=767600 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:41.385 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 767600 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 767600 ']' 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.385 01:24:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.385 [2024-10-01 01:24:21.173945] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
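
The entries that follow start a two-core target and then bridge TCP to its UNIX RPC socket with socat, so rpc.py can reach it at 127.0.0.1:9998; rpc_get_methods over that bridge produces the long method list below. A minimal sketch of the bridge, using the same addresses and flags as the trace (socket path assumed):

    # Forward 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the method list over TCP, with the retry/timeout flags seen in the trace.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"
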
00:06:41.385 [2024-10-01 01:24:21.174100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767600 ] 00:06:41.385 [2024-10-01 01:24:21.232597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.643 [2024-10-01 01:24:21.323348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.643 [2024-10-01 01:24:21.323368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.901 01:24:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.902 01:24:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:41.902 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=767719 00:06:41.902 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.902 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:42.160 [ 00:06:42.160 "bdev_malloc_delete", 00:06:42.160 "bdev_malloc_create", 00:06:42.160 "bdev_null_resize", 00:06:42.160 "bdev_null_delete", 00:06:42.160 "bdev_null_create", 00:06:42.160 "bdev_nvme_cuse_unregister", 00:06:42.160 "bdev_nvme_cuse_register", 00:06:42.160 "bdev_opal_new_user", 00:06:42.160 "bdev_opal_set_lock_state", 00:06:42.160 "bdev_opal_delete", 00:06:42.160 "bdev_opal_get_info", 00:06:42.160 "bdev_opal_create", 00:06:42.160 "bdev_nvme_opal_revert", 00:06:42.160 "bdev_nvme_opal_init", 00:06:42.160 "bdev_nvme_send_cmd", 00:06:42.160 "bdev_nvme_set_keys", 00:06:42.160 "bdev_nvme_get_path_iostat", 00:06:42.160 "bdev_nvme_get_mdns_discovery_info", 00:06:42.160 "bdev_nvme_stop_mdns_discovery", 00:06:42.160 "bdev_nvme_start_mdns_discovery", 00:06:42.160 "bdev_nvme_set_multipath_policy", 00:06:42.160 "bdev_nvme_set_preferred_path", 00:06:42.160 "bdev_nvme_get_io_paths", 00:06:42.160 "bdev_nvme_remove_error_injection", 00:06:42.160 "bdev_nvme_add_error_injection", 00:06:42.160 "bdev_nvme_get_discovery_info", 00:06:42.160 "bdev_nvme_stop_discovery", 00:06:42.160 "bdev_nvme_start_discovery", 00:06:42.160 "bdev_nvme_get_controller_health_info", 00:06:42.160 "bdev_nvme_disable_controller", 00:06:42.160 "bdev_nvme_enable_controller", 00:06:42.160 "bdev_nvme_reset_controller", 00:06:42.160 "bdev_nvme_get_transport_statistics", 00:06:42.160 "bdev_nvme_apply_firmware", 00:06:42.160 "bdev_nvme_detach_controller", 00:06:42.160 "bdev_nvme_get_controllers", 00:06:42.160 "bdev_nvme_attach_controller", 00:06:42.160 "bdev_nvme_set_hotplug", 00:06:42.160 "bdev_nvme_set_options", 00:06:42.160 "bdev_passthru_delete", 00:06:42.160 "bdev_passthru_create", 00:06:42.160 "bdev_lvol_set_parent_bdev", 00:06:42.160 "bdev_lvol_set_parent", 00:06:42.160 "bdev_lvol_check_shallow_copy", 00:06:42.160 "bdev_lvol_start_shallow_copy", 00:06:42.160 "bdev_lvol_grow_lvstore", 00:06:42.160 "bdev_lvol_get_lvols", 00:06:42.160 "bdev_lvol_get_lvstores", 00:06:42.160 "bdev_lvol_delete", 00:06:42.160 "bdev_lvol_set_read_only", 00:06:42.160 "bdev_lvol_resize", 00:06:42.160 "bdev_lvol_decouple_parent", 00:06:42.160 "bdev_lvol_inflate", 00:06:42.160 "bdev_lvol_rename", 00:06:42.160 "bdev_lvol_clone_bdev", 00:06:42.160 "bdev_lvol_clone", 00:06:42.160 "bdev_lvol_snapshot", 00:06:42.160 "bdev_lvol_create", 00:06:42.160 "bdev_lvol_delete_lvstore", 00:06:42.160 "bdev_lvol_rename_lvstore", 
00:06:42.160 "bdev_lvol_create_lvstore", 00:06:42.160 "bdev_raid_set_options", 00:06:42.160 "bdev_raid_remove_base_bdev", 00:06:42.160 "bdev_raid_add_base_bdev", 00:06:42.160 "bdev_raid_delete", 00:06:42.160 "bdev_raid_create", 00:06:42.160 "bdev_raid_get_bdevs", 00:06:42.160 "bdev_error_inject_error", 00:06:42.160 "bdev_error_delete", 00:06:42.160 "bdev_error_create", 00:06:42.160 "bdev_split_delete", 00:06:42.160 "bdev_split_create", 00:06:42.160 "bdev_delay_delete", 00:06:42.160 "bdev_delay_create", 00:06:42.160 "bdev_delay_update_latency", 00:06:42.160 "bdev_zone_block_delete", 00:06:42.160 "bdev_zone_block_create", 00:06:42.160 "blobfs_create", 00:06:42.160 "blobfs_detect", 00:06:42.160 "blobfs_set_cache_size", 00:06:42.160 "bdev_aio_delete", 00:06:42.160 "bdev_aio_rescan", 00:06:42.160 "bdev_aio_create", 00:06:42.160 "bdev_ftl_set_property", 00:06:42.160 "bdev_ftl_get_properties", 00:06:42.160 "bdev_ftl_get_stats", 00:06:42.160 "bdev_ftl_unmap", 00:06:42.160 "bdev_ftl_unload", 00:06:42.160 "bdev_ftl_delete", 00:06:42.160 "bdev_ftl_load", 00:06:42.160 "bdev_ftl_create", 00:06:42.160 "bdev_virtio_attach_controller", 00:06:42.160 "bdev_virtio_scsi_get_devices", 00:06:42.160 "bdev_virtio_detach_controller", 00:06:42.160 "bdev_virtio_blk_set_hotplug", 00:06:42.160 "bdev_iscsi_delete", 00:06:42.160 "bdev_iscsi_create", 00:06:42.160 "bdev_iscsi_set_options", 00:06:42.160 "accel_error_inject_error", 00:06:42.160 "ioat_scan_accel_module", 00:06:42.160 "dsa_scan_accel_module", 00:06:42.160 "iaa_scan_accel_module", 00:06:42.160 "vfu_virtio_create_fs_endpoint", 00:06:42.160 "vfu_virtio_create_scsi_endpoint", 00:06:42.160 "vfu_virtio_scsi_remove_target", 00:06:42.160 "vfu_virtio_scsi_add_target", 00:06:42.160 "vfu_virtio_create_blk_endpoint", 00:06:42.160 "vfu_virtio_delete_endpoint", 00:06:42.160 "keyring_file_remove_key", 00:06:42.160 "keyring_file_add_key", 00:06:42.160 "keyring_linux_set_options", 00:06:42.160 "fsdev_aio_delete", 00:06:42.160 "fsdev_aio_create", 00:06:42.160 "iscsi_get_histogram", 00:06:42.160 "iscsi_enable_histogram", 00:06:42.160 "iscsi_set_options", 00:06:42.160 "iscsi_get_auth_groups", 00:06:42.161 "iscsi_auth_group_remove_secret", 00:06:42.161 "iscsi_auth_group_add_secret", 00:06:42.161 "iscsi_delete_auth_group", 00:06:42.161 "iscsi_create_auth_group", 00:06:42.161 "iscsi_set_discovery_auth", 00:06:42.161 "iscsi_get_options", 00:06:42.161 "iscsi_target_node_request_logout", 00:06:42.161 "iscsi_target_node_set_redirect", 00:06:42.161 "iscsi_target_node_set_auth", 00:06:42.161 "iscsi_target_node_add_lun", 00:06:42.161 "iscsi_get_stats", 00:06:42.161 "iscsi_get_connections", 00:06:42.161 "iscsi_portal_group_set_auth", 00:06:42.161 "iscsi_start_portal_group", 00:06:42.161 "iscsi_delete_portal_group", 00:06:42.161 "iscsi_create_portal_group", 00:06:42.161 "iscsi_get_portal_groups", 00:06:42.161 "iscsi_delete_target_node", 00:06:42.161 "iscsi_target_node_remove_pg_ig_maps", 00:06:42.161 "iscsi_target_node_add_pg_ig_maps", 00:06:42.161 "iscsi_create_target_node", 00:06:42.161 "iscsi_get_target_nodes", 00:06:42.161 "iscsi_delete_initiator_group", 00:06:42.161 "iscsi_initiator_group_remove_initiators", 00:06:42.161 "iscsi_initiator_group_add_initiators", 00:06:42.161 "iscsi_create_initiator_group", 00:06:42.161 "iscsi_get_initiator_groups", 00:06:42.161 "nvmf_set_crdt", 00:06:42.161 "nvmf_set_config", 00:06:42.161 "nvmf_set_max_subsystems", 00:06:42.161 "nvmf_stop_mdns_prr", 00:06:42.161 "nvmf_publish_mdns_prr", 00:06:42.161 "nvmf_subsystem_get_listeners", 00:06:42.161 
"nvmf_subsystem_get_qpairs", 00:06:42.161 "nvmf_subsystem_get_controllers", 00:06:42.161 "nvmf_get_stats", 00:06:42.161 "nvmf_get_transports", 00:06:42.161 "nvmf_create_transport", 00:06:42.161 "nvmf_get_targets", 00:06:42.161 "nvmf_delete_target", 00:06:42.161 "nvmf_create_target", 00:06:42.161 "nvmf_subsystem_allow_any_host", 00:06:42.161 "nvmf_subsystem_set_keys", 00:06:42.161 "nvmf_subsystem_remove_host", 00:06:42.161 "nvmf_subsystem_add_host", 00:06:42.161 "nvmf_ns_remove_host", 00:06:42.161 "nvmf_ns_add_host", 00:06:42.161 "nvmf_subsystem_remove_ns", 00:06:42.161 "nvmf_subsystem_set_ns_ana_group", 00:06:42.161 "nvmf_subsystem_add_ns", 00:06:42.161 "nvmf_subsystem_listener_set_ana_state", 00:06:42.161 "nvmf_discovery_get_referrals", 00:06:42.161 "nvmf_discovery_remove_referral", 00:06:42.161 "nvmf_discovery_add_referral", 00:06:42.161 "nvmf_subsystem_remove_listener", 00:06:42.161 "nvmf_subsystem_add_listener", 00:06:42.161 "nvmf_delete_subsystem", 00:06:42.161 "nvmf_create_subsystem", 00:06:42.161 "nvmf_get_subsystems", 00:06:42.161 "env_dpdk_get_mem_stats", 00:06:42.161 "nbd_get_disks", 00:06:42.161 "nbd_stop_disk", 00:06:42.161 "nbd_start_disk", 00:06:42.161 "ublk_recover_disk", 00:06:42.161 "ublk_get_disks", 00:06:42.161 "ublk_stop_disk", 00:06:42.161 "ublk_start_disk", 00:06:42.161 "ublk_destroy_target", 00:06:42.161 "ublk_create_target", 00:06:42.161 "virtio_blk_create_transport", 00:06:42.161 "virtio_blk_get_transports", 00:06:42.161 "vhost_controller_set_coalescing", 00:06:42.161 "vhost_get_controllers", 00:06:42.161 "vhost_delete_controller", 00:06:42.161 "vhost_create_blk_controller", 00:06:42.161 "vhost_scsi_controller_remove_target", 00:06:42.161 "vhost_scsi_controller_add_target", 00:06:42.161 "vhost_start_scsi_controller", 00:06:42.161 "vhost_create_scsi_controller", 00:06:42.161 "thread_set_cpumask", 00:06:42.161 "scheduler_set_options", 00:06:42.161 "framework_get_governor", 00:06:42.161 "framework_get_scheduler", 00:06:42.161 "framework_set_scheduler", 00:06:42.161 "framework_get_reactors", 00:06:42.161 "thread_get_io_channels", 00:06:42.161 "thread_get_pollers", 00:06:42.161 "thread_get_stats", 00:06:42.161 "framework_monitor_context_switch", 00:06:42.161 "spdk_kill_instance", 00:06:42.161 "log_enable_timestamps", 00:06:42.161 "log_get_flags", 00:06:42.161 "log_clear_flag", 00:06:42.161 "log_set_flag", 00:06:42.161 "log_get_level", 00:06:42.161 "log_set_level", 00:06:42.161 "log_get_print_level", 00:06:42.161 "log_set_print_level", 00:06:42.161 "framework_enable_cpumask_locks", 00:06:42.161 "framework_disable_cpumask_locks", 00:06:42.161 "framework_wait_init", 00:06:42.161 "framework_start_init", 00:06:42.161 "scsi_get_devices", 00:06:42.161 "bdev_get_histogram", 00:06:42.161 "bdev_enable_histogram", 00:06:42.161 "bdev_set_qos_limit", 00:06:42.161 "bdev_set_qd_sampling_period", 00:06:42.161 "bdev_get_bdevs", 00:06:42.161 "bdev_reset_iostat", 00:06:42.161 "bdev_get_iostat", 00:06:42.161 "bdev_examine", 00:06:42.161 "bdev_wait_for_examine", 00:06:42.161 "bdev_set_options", 00:06:42.161 "accel_get_stats", 00:06:42.161 "accel_set_options", 00:06:42.161 "accel_set_driver", 00:06:42.161 "accel_crypto_key_destroy", 00:06:42.161 "accel_crypto_keys_get", 00:06:42.161 "accel_crypto_key_create", 00:06:42.161 "accel_assign_opc", 00:06:42.161 "accel_get_module_info", 00:06:42.161 "accel_get_opc_assignments", 00:06:42.161 "vmd_rescan", 00:06:42.161 "vmd_remove_device", 00:06:42.161 "vmd_enable", 00:06:42.161 "sock_get_default_impl", 00:06:42.161 "sock_set_default_impl", 
00:06:42.161 "sock_impl_set_options", 00:06:42.161 "sock_impl_get_options", 00:06:42.161 "iobuf_get_stats", 00:06:42.161 "iobuf_set_options", 00:06:42.161 "keyring_get_keys", 00:06:42.161 "vfu_tgt_set_base_path", 00:06:42.161 "framework_get_pci_devices", 00:06:42.161 "framework_get_config", 00:06:42.161 "framework_get_subsystems", 00:06:42.161 "fsdev_set_opts", 00:06:42.161 "fsdev_get_opts", 00:06:42.161 "trace_get_info", 00:06:42.161 "trace_get_tpoint_group_mask", 00:06:42.161 "trace_disable_tpoint_group", 00:06:42.161 "trace_enable_tpoint_group", 00:06:42.161 "trace_clear_tpoint_mask", 00:06:42.161 "trace_set_tpoint_mask", 00:06:42.161 "notify_get_notifications", 00:06:42.161 "notify_get_types", 00:06:42.161 "spdk_get_version", 00:06:42.161 "rpc_get_methods" 00:06:42.161 ] 00:06:42.161 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.161 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:42.161 01:24:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 767600 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 767600 ']' 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 767600 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 767600 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 767600' 00:06:42.161 killing process with pid 767600 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 767600 00:06:42.161 01:24:21 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 767600 00:06:42.729 00:06:42.729 real 0m1.349s 00:06:42.729 user 0m2.367s 00:06:42.729 sys 0m0.484s 00:06:42.729 01:24:22 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.729 01:24:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 END TEST spdkcli_tcp 00:06:42.729 ************************************ 00:06:42.729 01:24:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.729 01:24:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.729 01:24:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.729 01:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 START TEST dpdk_mem_utility 00:06:42.729 ************************************ 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.729 * Looking for test storage... 
00:06:42.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.729 01:24:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.729 --rc genhtml_branch_coverage=1 00:06:42.729 --rc genhtml_function_coverage=1 00:06:42.729 --rc genhtml_legend=1 00:06:42.729 --rc geninfo_all_blocks=1 00:06:42.729 --rc geninfo_unexecuted_blocks=1 00:06:42.729 00:06:42.729 ' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.729 --rc 
genhtml_branch_coverage=1 00:06:42.729 --rc genhtml_function_coverage=1 00:06:42.729 --rc genhtml_legend=1 00:06:42.729 --rc geninfo_all_blocks=1 00:06:42.729 --rc geninfo_unexecuted_blocks=1 00:06:42.729 00:06:42.729 ' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.729 --rc genhtml_branch_coverage=1 00:06:42.729 --rc genhtml_function_coverage=1 00:06:42.729 --rc genhtml_legend=1 00:06:42.729 --rc geninfo_all_blocks=1 00:06:42.729 --rc geninfo_unexecuted_blocks=1 00:06:42.729 00:06:42.729 ' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.729 --rc genhtml_branch_coverage=1 00:06:42.729 --rc genhtml_function_coverage=1 00:06:42.729 --rc genhtml_legend=1 00:06:42.729 --rc geninfo_all_blocks=1 00:06:42.729 --rc geninfo_unexecuted_blocks=1 00:06:42.729 00:06:42.729 ' 00:06:42.729 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.729 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=767923 00:06:42.729 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.729 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 767923 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 767923 ']' 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.729 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 [2024-10-01 01:24:22.575468] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
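
The dpdk_mem_utility entries below first ask the running target to dump its DPDK memory state (env_dpdk_get_mem_stats returns the dump filename, /tmp/spdk_mem_dump.txt), then run scripts/dpdk_mem_info.py to summarise it as heaps, mempools and memzones, and once more with -m 0 for per-element detail of heap 0. A sketch of that flow, with the RPC socket path assumed:

    ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    ./scripts/dpdk_mem_info.py          # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0     # per-element detail for heap id 0
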
00:06:42.729 [2024-10-01 01:24:22.575564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767923 ] 00:06:42.990 [2024-10-01 01:24:22.634461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.990 [2024-10-01 01:24:22.722389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.249 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.249 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:43.249 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:43.249 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:43.249 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.249 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.249 { 00:06:43.249 "filename": "/tmp/spdk_mem_dump.txt" 00:06:43.249 } 00:06:43.249 01:24:22 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.249 01:24:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:43.249 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:43.249 1 heaps totaling size 860.000000 MiB 00:06:43.249 size: 860.000000 MiB heap id: 0 00:06:43.249 end heaps---------- 00:06:43.249 9 mempools totaling size 642.649841 MiB 00:06:43.249 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:43.249 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:43.249 size: 92.545471 MiB name: bdev_io_767923 00:06:43.249 size: 51.011292 MiB name: evtpool_767923 00:06:43.249 size: 50.003479 MiB name: msgpool_767923 00:06:43.249 size: 36.509338 MiB name: fsdev_io_767923 00:06:43.249 size: 21.763794 MiB name: PDU_Pool 00:06:43.249 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:43.249 size: 0.026123 MiB name: Session_Pool 00:06:43.249 end mempools------- 00:06:43.249 6 memzones totaling size 4.142822 MiB 00:06:43.249 size: 1.000366 MiB name: RG_ring_0_767923 00:06:43.249 size: 1.000366 MiB name: RG_ring_1_767923 00:06:43.249 size: 1.000366 MiB name: RG_ring_4_767923 00:06:43.249 size: 1.000366 MiB name: RG_ring_5_767923 00:06:43.249 size: 0.125366 MiB name: RG_ring_2_767923 00:06:43.249 size: 0.015991 MiB name: RG_ring_3_767923 00:06:43.249 end memzones------- 00:06:43.249 01:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:43.508 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:43.508 list of free elements. 
size: 13.984680 MiB 00:06:43.508 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:43.508 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:43.508 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:43.508 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:43.508 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:43.508 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:43.508 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:43.508 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:43.508 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:43.508 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:43.508 element at address: 0x200003e00000 with size: 0.495605 MiB 00:06:43.508 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:43.508 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:43.508 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:43.508 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:43.508 element at address: 0x200003a00000 with size: 0.354858 MiB 00:06:43.508 list of standard malloc elements. size: 199.218628 MiB 00:06:43.508 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:43.508 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:43.508 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:43.508 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:43.508 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:43.508 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:43.508 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:43.508 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:43.508 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:43.508 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:43.508 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:43.508 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:43.508 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:43.508 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:43.508 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:43.508 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:43.509 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:43.509 list of memzone associated elements. size: 646.796692 MiB 00:06:43.509 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:43.509 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:43.509 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:43.509 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:43.509 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:43.509 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_767923_0 00:06:43.509 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:43.509 associated memzone info: size: 48.002930 MiB name: MP_evtpool_767923_0 00:06:43.509 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:43.509 associated memzone info: size: 48.002930 MiB name: MP_msgpool_767923_0 00:06:43.509 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:43.509 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_767923_0 00:06:43.509 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:43.509 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:43.509 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:43.509 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:43.509 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:43.509 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_767923 00:06:43.509 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:43.509 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_767923 00:06:43.509 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:43.509 associated memzone info: size: 1.007996 MiB name: MP_evtpool_767923 00:06:43.509 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:43.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:43.509 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:43.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:43.509 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:43.509 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:43.509 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:43.509 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:43.509 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:43.509 associated memzone info: size: 1.000366 MiB name: RG_ring_0_767923 00:06:43.509 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:43.509 associated memzone info: size: 
1.000366 MiB name: RG_ring_1_767923 00:06:43.509 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:43.509 associated memzone info: size: 1.000366 MiB name: RG_ring_4_767923 00:06:43.509 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:43.509 associated memzone info: size: 1.000366 MiB name: RG_ring_5_767923 00:06:43.509 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:43.509 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_767923 00:06:43.509 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:43.509 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_767923 00:06:43.509 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:43.509 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:43.509 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:43.509 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:43.509 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:43.509 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:43.509 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:06:43.509 associated memzone info: size: 0.125366 MiB name: RG_ring_2_767923 00:06:43.509 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:43.509 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:43.509 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:43.509 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:43.509 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:06:43.509 associated memzone info: size: 0.015991 MiB name: RG_ring_3_767923 00:06:43.509 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:43.509 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:43.509 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:43.509 associated memzone info: size: 0.000183 MiB name: MP_msgpool_767923 00:06:43.509 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:43.509 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_767923 00:06:43.509 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:06:43.509 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_767923 00:06:43.509 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:43.509 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:43.509 01:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:43.509 01:24:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 767923 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 767923 ']' 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 767923 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 767923 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 767923' 00:06:43.509 killing 
process with pid 767923 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 767923 00:06:43.509 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 767923 00:06:43.768 00:06:43.768 real 0m1.194s 00:06:43.768 user 0m1.135s 00:06:43.768 sys 0m0.460s 00:06:43.768 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.768 01:24:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.768 ************************************ 00:06:43.768 END TEST dpdk_mem_utility 00:06:43.768 ************************************ 00:06:43.768 01:24:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:43.768 01:24:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.768 01:24:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.768 01:24:23 -- common/autotest_common.sh@10 -- # set +x 00:06:44.026 ************************************ 00:06:44.026 START TEST event 00:06:44.026 ************************************ 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:44.026 * Looking for test storage... 00:06:44.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.026 01:24:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.026 01:24:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.026 01:24:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.026 01:24:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.026 01:24:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.026 01:24:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.026 01:24:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.026 01:24:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.026 01:24:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.026 01:24:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.026 01:24:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.026 01:24:23 event -- scripts/common.sh@344 -- # case "$op" in 00:06:44.026 01:24:23 event -- scripts/common.sh@345 -- # : 1 00:06:44.026 01:24:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.026 01:24:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.026 01:24:23 event -- scripts/common.sh@365 -- # decimal 1 00:06:44.026 01:24:23 event -- scripts/common.sh@353 -- # local d=1 00:06:44.026 01:24:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.026 01:24:23 event -- scripts/common.sh@355 -- # echo 1 00:06:44.026 01:24:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.026 01:24:23 event -- scripts/common.sh@366 -- # decimal 2 00:06:44.026 01:24:23 event -- scripts/common.sh@353 -- # local d=2 00:06:44.026 01:24:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.026 01:24:23 event -- scripts/common.sh@355 -- # echo 2 00:06:44.026 01:24:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.026 01:24:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.026 01:24:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.026 01:24:23 event -- scripts/common.sh@368 -- # return 0 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.026 --rc genhtml_branch_coverage=1 00:06:44.026 --rc genhtml_function_coverage=1 00:06:44.026 --rc genhtml_legend=1 00:06:44.026 --rc geninfo_all_blocks=1 00:06:44.026 --rc geninfo_unexecuted_blocks=1 00:06:44.026 00:06:44.026 ' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.026 --rc genhtml_branch_coverage=1 00:06:44.026 --rc genhtml_function_coverage=1 00:06:44.026 --rc genhtml_legend=1 00:06:44.026 --rc geninfo_all_blocks=1 00:06:44.026 --rc geninfo_unexecuted_blocks=1 00:06:44.026 00:06:44.026 ' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.026 --rc genhtml_branch_coverage=1 00:06:44.026 --rc genhtml_function_coverage=1 00:06:44.026 --rc genhtml_legend=1 00:06:44.026 --rc geninfo_all_blocks=1 00:06:44.026 --rc geninfo_unexecuted_blocks=1 00:06:44.026 00:06:44.026 ' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.026 --rc genhtml_branch_coverage=1 00:06:44.026 --rc genhtml_function_coverage=1 00:06:44.026 --rc genhtml_legend=1 00:06:44.026 --rc geninfo_all_blocks=1 00:06:44.026 --rc geninfo_unexecuted_blocks=1 00:06:44.026 00:06:44.026 ' 00:06:44.026 01:24:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:44.026 01:24:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.026 01:24:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:44.026 01:24:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.026 01:24:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.026 ************************************ 00:06:44.026 START TEST event_perf 00:06:44.026 ************************************ 00:06:44.026 01:24:23 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:44.026 Running I/O for 1 seconds...[2024-10-01 01:24:23.805941] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:44.026 [2024-10-01 01:24:23.806027] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768127 ] 00:06:44.026 [2024-10-01 01:24:23.870065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.288 [2024-10-01 01:24:23.965592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.288 [2024-10-01 01:24:23.965663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.288 [2024-10-01 01:24:23.965712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.288 [2024-10-01 01:24:23.965714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.223 Running I/O for 1 seconds... 00:06:45.223 lcore 0: 231851 00:06:45.223 lcore 1: 231849 00:06:45.223 lcore 2: 231849 00:06:45.223 lcore 3: 231849 00:06:45.223 done. 00:06:45.223 00:06:45.223 real 0m1.259s 00:06:45.223 user 0m4.166s 00:06:45.223 sys 0m0.088s 00:06:45.223 01:24:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.223 01:24:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.223 ************************************ 00:06:45.223 END TEST event_perf 00:06:45.223 ************************************ 00:06:45.223 01:24:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.223 01:24:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:45.223 01:24:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.223 01:24:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.481 ************************************ 00:06:45.481 START TEST event_reactor 00:06:45.481 ************************************ 00:06:45.481 01:24:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.481 [2024-10-01 01:24:25.111466] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:45.481 [2024-10-01 01:24:25.111536] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768282 ] 00:06:45.481 [2024-10-01 01:24:25.178910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.481 [2024-10-01 01:24:25.271850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.854 test_start 00:06:46.854 oneshot 00:06:46.854 tick 100 00:06:46.854 tick 100 00:06:46.854 tick 250 00:06:46.854 tick 100 00:06:46.854 tick 100 00:06:46.854 tick 100 00:06:46.854 tick 250 00:06:46.854 tick 500 00:06:46.854 tick 100 00:06:46.854 tick 100 00:06:46.854 tick 250 00:06:46.854 tick 100 00:06:46.854 tick 100 00:06:46.854 test_end 00:06:46.854 00:06:46.854 real 0m1.256s 00:06:46.854 user 0m1.162s 00:06:46.854 sys 0m0.090s 00:06:46.854 01:24:26 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.854 01:24:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:46.854 ************************************ 00:06:46.854 END TEST event_reactor 00:06:46.854 ************************************ 00:06:46.854 01:24:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.854 01:24:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:46.854 01:24:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.854 01:24:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.854 ************************************ 00:06:46.854 START TEST event_reactor_perf 00:06:46.854 ************************************ 00:06:46.854 01:24:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:46.854 [2024-10-01 01:24:26.414621] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:46.854 [2024-10-01 01:24:26.414682] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768440 ] 00:06:46.854 [2024-10-01 01:24:26.478287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.854 [2024-10-01 01:24:26.571069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.228 test_start 00:06:48.228 test_end 00:06:48.228 Performance: 357100 events per second 00:06:48.228 00:06:48.228 real 0m1.252s 00:06:48.228 user 0m1.165s 00:06:48.228 sys 0m0.081s 00:06:48.228 01:24:27 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.228 01:24:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 ************************************ 00:06:48.228 END TEST event_reactor_perf 00:06:48.228 ************************************ 00:06:48.228 01:24:27 event -- event/event.sh@49 -- # uname -s 00:06:48.228 01:24:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:48.228 01:24:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.228 01:24:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.228 01:24:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.228 01:24:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.228 ************************************ 00:06:48.228 START TEST event_scheduler 00:06:48.228 ************************************ 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.228 * Looking for test storage... 
00:06:48.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.228 01:24:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.228 --rc genhtml_branch_coverage=1 00:06:48.228 --rc genhtml_function_coverage=1 00:06:48.228 --rc genhtml_legend=1 00:06:48.228 --rc geninfo_all_blocks=1 00:06:48.228 --rc geninfo_unexecuted_blocks=1 00:06:48.228 00:06:48.228 ' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.228 --rc genhtml_branch_coverage=1 00:06:48.228 --rc genhtml_function_coverage=1 00:06:48.228 --rc genhtml_legend=1 00:06:48.228 --rc geninfo_all_blocks=1 00:06:48.228 --rc geninfo_unexecuted_blocks=1 00:06:48.228 00:06:48.228 ' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.228 --rc genhtml_branch_coverage=1 00:06:48.228 --rc genhtml_function_coverage=1 00:06:48.228 --rc genhtml_legend=1 00:06:48.228 --rc geninfo_all_blocks=1 00:06:48.228 --rc geninfo_unexecuted_blocks=1 00:06:48.228 00:06:48.228 ' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.228 --rc genhtml_branch_coverage=1 00:06:48.228 --rc genhtml_function_coverage=1 00:06:48.228 --rc genhtml_legend=1 00:06:48.228 --rc geninfo_all_blocks=1 00:06:48.228 --rc geninfo_unexecuted_blocks=1 00:06:48.228 00:06:48.228 ' 00:06:48.228 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:48.228 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=768636 00:06:48.228 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:48.228 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.228 01:24:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 768636 
00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 768636 ']' 00:06:48.228 01:24:27 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.229 01:24:27 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.229 01:24:27 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.229 01:24:27 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.229 01:24:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.229 [2024-10-01 01:24:27.888201] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:48.229 [2024-10-01 01:24:27.888281] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768636 ] 00:06:48.229 [2024-10-01 01:24:27.954721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.229 [2024-10-01 01:24:28.049723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.229 [2024-10-01 01:24:28.049809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.229 [2024-10-01 01:24:28.049812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.229 [2024-10-01 01:24:28.049747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.487 01:24:28 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.487 01:24:28 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:48.487 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.487 01:24:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.487 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.487 [2024-10-01 01:24:28.150972] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:48.487 [2024-10-01 01:24:28.151023] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:48.487 [2024-10-01 01:24:28.151042] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.487 [2024-10-01 01:24:28.151054] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.487 [2024-10-01 01:24:28.151074] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 [2024-10-01 01:24:28.247677] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
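What scheduler.sh is doing at this point boils down to two framework RPCs issued while the app is still paused in --wait-for-rpc: select the dynamic scheduler, then finish subsystem init. A minimal sketch of that same sequence against the socket used here; the two framework_* calls and their order come straight from the trace, while the rpc.py invocation style and the trailing framework_get_scheduler check are assumptions about this SPDK revision.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used elsewhere in this log
  SOCK=/var/tmp/spdk.sock                                                # socket the scheduler app listens on above
  # Pick the dynamic scheduler while the app is paused in --wait-for-rpc,
  # then complete subsystem initialization (mirrors scheduler.sh steps 39-40 in the trace).
  $RPC -s $SOCK framework_set_scheduler dynamic
  $RPC -s $SOCK framework_start_init
  # Assumed available in this revision: report which scheduler ended up active.
  $RPC -s $SOCK framework_get_scheduler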
00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 ************************************ 00:06:48.488 START TEST scheduler_create_thread 00:06:48.488 ************************************ 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 2 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 3 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 4 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 5 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 6 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 7 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.488 8 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.488 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.746 9 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.746 10 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.746 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.313 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.313 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:49.313 01:24:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:49.313 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.313 01:24:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.248 01:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.248 00:06:50.248 real 0m1.754s 00:06:50.248 user 0m0.011s 00:06:50.248 sys 0m0.005s 00:06:50.248 01:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.248 01:24:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.248 ************************************ 00:06:50.249 END TEST scheduler_create_thread 00:06:50.249 ************************************ 00:06:50.249 01:24:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:50.249 01:24:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 768636 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 768636 ']' 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 768636 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 768636 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 768636' 00:06:50.249 killing process with pid 768636 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 768636 00:06:50.249 01:24:30 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 768636 00:06:50.813 [2024-10-01 01:24:30.511435] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
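The scheduler_create_thread sub-test above drives everything through the scheduler_plugin RPCs: it creates pinned active and idle threads (-n name, -m cpumask, -a active percentage), re-weights one of them with scheduler_thread_set_active, and deletes another. A rough equivalent against a running scheduler app, limited to the calls seen in the trace; the thread name is illustrative and the PYTHONPATH entry needed to make --plugin scheduler_plugin resolvable is an assumption.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Assumption: the plugin module ships alongside the scheduler test app.
  export PYTHONPATH=$PYTHONPATH:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
  # Create a thread pinned to core 0 that reports itself 100% active; the RPC prints the new thread id.
  tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n demo_pinned -m 0x1 -a 100)
  # Drop its reported load to 50%, then remove it again.
  $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"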
00:06:51.071 00:06:51.071 real 0m3.033s 00:06:51.071 user 0m4.029s 00:06:51.071 sys 0m0.361s 00:06:51.071 01:24:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.071 01:24:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.071 ************************************ 00:06:51.071 END TEST event_scheduler 00:06:51.071 ************************************ 00:06:51.071 01:24:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.071 01:24:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.071 01:24:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.071 01:24:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.072 01:24:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.072 ************************************ 00:06:51.072 START TEST app_repeat 00:06:51.072 ************************************ 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=769080 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 769080' 00:06:51.072 Process app_repeat pid: 769080 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.072 spdk_app_start Round 0 00:06:51.072 01:24:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 769080 /var/tmp/spdk-nbd.sock 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 769080 ']' 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.072 01:24:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.072 [2024-10-01 01:24:30.814831] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:51.072 [2024-10-01 01:24:30.814894] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769080 ] 00:06:51.072 [2024-10-01 01:24:30.872620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.329 [2024-10-01 01:24:30.965230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.329 [2024-10-01 01:24:30.965234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.329 01:24:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.329 01:24:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.329 01:24:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.587 Malloc0 00:06:51.587 01:24:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.846 Malloc1 00:06:51.846 01:24:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.846 01:24:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.426 /dev/nbd0 00:06:52.426 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.426 01:24:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.426 01:24:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.426 1+0 records in 00:06:52.426 1+0 records out 00:06:52.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200698 s, 20.4 MB/s 00:06:52.426 01:24:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.426 01:24:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.427 01:24:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.427 01:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.427 01:24:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.427 01:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.427 01:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.427 01:24:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.698 /dev/nbd1 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.698 1+0 records in 00:06:52.698 1+0 records out 00:06:52.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192204 s, 21.3 MB/s 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.698 01:24:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.698 
01:24:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.698 01:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.960 { 00:06:52.960 "nbd_device": "/dev/nbd0", 00:06:52.960 "bdev_name": "Malloc0" 00:06:52.960 }, 00:06:52.960 { 00:06:52.960 "nbd_device": "/dev/nbd1", 00:06:52.960 "bdev_name": "Malloc1" 00:06:52.960 } 00:06:52.960 ]' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.960 { 00:06:52.960 "nbd_device": "/dev/nbd0", 00:06:52.960 "bdev_name": "Malloc0" 00:06:52.960 }, 00:06:52.960 { 00:06:52.960 "nbd_device": "/dev/nbd1", 00:06:52.960 "bdev_name": "Malloc1" 00:06:52.960 } 00:06:52.960 ]' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.960 /dev/nbd1' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.960 /dev/nbd1' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.960 256+0 records in 00:06:52.960 256+0 records out 00:06:52.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385472 s, 272 MB/s 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.960 256+0 records in 00:06:52.960 256+0 records out 00:06:52.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020189 s, 51.9 MB/s 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.960 256+0 records in 00:06:52.960 256+0 records out 00:06:52.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236479 s, 44.3 MB/s 00:06:52.960 01:24:32 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.960 01:24:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.218 01:24:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.218 01:24:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.476 01:24:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.734 01:24:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.734 01:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.734 01:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.992 01:24:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.992 01:24:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.250 01:24:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.508 [2024-10-01 01:24:34.140917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.508 [2024-10-01 01:24:34.229722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.508 [2024-10-01 01:24:34.229723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.508 [2024-10-01 01:24:34.291358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.508 [2024-10-01 01:24:34.291456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.786 01:24:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.786 01:24:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.786 spdk_app_start Round 1 00:06:57.786 01:24:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 769080 /var/tmp/spdk-nbd.sock 00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 769080 ']' 00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
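The round-0 pass above is SPDK's nbd data-verify flow: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, a 1 MiB reference file is filled from /dev/urandom, pushed through each nbd device with O_DIRECT, compared back with cmp, and the exports are then stopped until nbd_get_disks returns an empty list. A condensed sketch of that flow, with $SPDK_DIR and the variable names treated as illustrative assumptions rather than the exact nbd_common.sh source:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # RPC socket used throughout this run
    tmp_file="$SPDK_DIR/test/event/nbdrandtest"
    nbd_list=(/dev/nbd0 /dev/nbd1)

    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)   # expect 2 exported devices
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                          # 1 MiB of reference data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct               # write it through the nbd device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                                          # byte-compare what comes back
    done
    rm "$tmp_file"
    for dev in "${nbd_list[@]}"; do
        $rpc nbd_stop_disk "$dev"                                                # tear the exports down again
    done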
00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.786 01:24:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.786 01:24:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.786 01:24:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:57.786 01:24:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.786 Malloc0 00:06:57.786 01:24:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.043 Malloc1 00:06:58.043 01:24:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.043 01:24:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.044 01:24:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.301 /dev/nbd0 00:06:58.301 01:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.301 01:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.301 01:24:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:58.301 1+0 records in 00:06:58.301 1+0 records out 00:06:58.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329732 s, 12.4 MB/s 00:06:58.302 01:24:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.302 01:24:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.302 01:24:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.302 01:24:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.302 01:24:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.302 01:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.302 01:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.302 01:24:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.558 /dev/nbd1 00:06:58.815 01:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.815 01:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.815 1+0 records in 00:06:58.815 1+0 records out 00:06:58.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002307 s, 17.8 MB/s 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.815 01:24:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.816 01:24:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.816 01:24:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.816 01:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.816 01:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.816 01:24:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.816 01:24:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.816 01:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:59.074 { 00:06:59.074 "nbd_device": "/dev/nbd0", 00:06:59.074 "bdev_name": "Malloc0" 00:06:59.074 }, 00:06:59.074 { 00:06:59.074 "nbd_device": "/dev/nbd1", 00:06:59.074 "bdev_name": "Malloc1" 00:06:59.074 } 00:06:59.074 ]' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.074 { 00:06:59.074 "nbd_device": "/dev/nbd0", 00:06:59.074 "bdev_name": "Malloc0" 00:06:59.074 }, 00:06:59.074 { 00:06:59.074 "nbd_device": "/dev/nbd1", 00:06:59.074 "bdev_name": "Malloc1" 00:06:59.074 } 00:06:59.074 ]' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.074 /dev/nbd1' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.074 /dev/nbd1' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.074 256+0 records in 00:06:59.074 256+0 records out 00:06:59.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536685 s, 195 MB/s 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.074 256+0 records in 00:06:59.074 256+0 records out 00:06:59.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219252 s, 47.8 MB/s 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.074 256+0 records in 00:06:59.074 256+0 records out 00:06:59.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209931 s, 49.9 MB/s 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.074 01:24:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.332 01:24:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.589 01:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.155 01:24:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.155 01:24:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.412 01:24:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.670 [2024-10-01 01:24:40.272802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.670 [2024-10-01 01:24:40.368980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.670 [2024-10-01 01:24:40.368981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.670 [2024-10-01 01:24:40.428467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.670 [2024-10-01 01:24:40.428541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.194 01:24:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.194 01:24:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.194 spdk_app_start Round 2 00:07:03.194 01:24:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 769080 /var/tmp/spdk-nbd.sock 00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 769080 ']' 00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
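The waitfornbd / waitfornbd_exit counters in the trace are a bounded poll on /proc/partitions rather than a fixed sleep: the helper retries up to 20 times until the nbd device appears (or, on teardown, disappears). A minimal sketch of that pattern; the function names and retry pacing are assumptions, and the real helper additionally does a 4 KiB O_DIRECT read from the device once it shows up (the dd/stat lines against test/event/nbdtest above):

    wait_for_nbd() {                      # succeed once nbdX is listed in /proc/partitions
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1                     # interval is an assumption; the harness uses its own pacing
        done
        return 1
    }

    wait_for_nbd_exit() {                 # same loop inverted: succeed once the device is gone
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }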
00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.194 01:24:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.760 01:24:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.760 01:24:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:03.760 01:24:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.760 Malloc0 00:07:03.760 01:24:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.018 Malloc1 00:07:04.277 01:24:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.277 01:24:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.536 /dev/nbd0 00:07:04.536 01:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.536 01:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:04.536 1+0 records in 00:07:04.536 1+0 records out 00:07:04.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232338 s, 17.6 MB/s 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.536 01:24:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:04.536 01:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.536 01:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.536 01:24:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.794 /dev/nbd1 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.794 1+0 records in 00:07:04.794 1+0 records out 00:07:04.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198244 s, 20.7 MB/s 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.794 01:24:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.794 01:24:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:05.053 { 00:07:05.053 "nbd_device": "/dev/nbd0", 00:07:05.053 "bdev_name": "Malloc0" 00:07:05.053 }, 00:07:05.053 { 00:07:05.053 "nbd_device": "/dev/nbd1", 00:07:05.053 "bdev_name": "Malloc1" 00:07:05.053 } 00:07:05.053 ]' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.053 { 00:07:05.053 "nbd_device": "/dev/nbd0", 00:07:05.053 "bdev_name": "Malloc0" 00:07:05.053 }, 00:07:05.053 { 00:07:05.053 "nbd_device": "/dev/nbd1", 00:07:05.053 "bdev_name": "Malloc1" 00:07:05.053 } 00:07:05.053 ]' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.053 /dev/nbd1' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.053 /dev/nbd1' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.053 01:24:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.054 256+0 records in 00:07:05.054 256+0 records out 00:07:05.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507686 s, 207 MB/s 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.054 256+0 records in 00:07:05.054 256+0 records out 00:07:05.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206352 s, 50.8 MB/s 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.054 256+0 records in 00:07:05.054 256+0 records out 00:07:05.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238337 s, 44.0 MB/s 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.054 01:24:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.312 01:24:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.570 01:24:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.827 01:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.828 01:24:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.085 01:24:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.086 01:24:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.086 01:24:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.343 01:24:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.601 [2024-10-01 01:24:46.354137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.601 [2024-10-01 01:24:46.443736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.601 [2024-10-01 01:24:46.443742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.859 [2024-10-01 01:24:46.505907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.859 [2024-10-01 01:24:46.505995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.387 01:24:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 769080 /var/tmp/spdk-nbd.sock 00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 769080 ']' 00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
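All three rounds follow the same driver loop from event.sh's app_repeat test: wait for the app's RPC socket, create two malloc bdevs (64 MB, 4 KiB blocks), run the nbd data verify, then ask the app to exit with SIGTERM and pause before the next iteration. Roughly, with 769080 being this run's app pid and the helper names reused from the trace as an outline only:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # $SPDK_DIR assumed to be the spdk checkout
    app_pid=769080                                             # pid of the app_repeat test app in this run

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock        # block until the RPC socket is up

        $rpc bdev_malloc_create 64 4096                        # -> Malloc0
        $rpc bdev_malloc_create 64 4096                        # -> Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

        $rpc spdk_kill_instance SIGTERM                        # app restarts itself for the next round
        sleep 3
    done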
00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.387 01:24:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:09.645 01:24:49 event.app_repeat -- event/event.sh@39 -- # killprocess 769080 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 769080 ']' 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 769080 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 769080 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 769080' 00:07:09.645 killing process with pid 769080 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 769080 00:07:09.645 01:24:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 769080 00:07:09.903 spdk_app_start is called in Round 0. 00:07:09.904 Shutdown signal received, stop current app iteration 00:07:09.904 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:07:09.904 spdk_app_start is called in Round 1. 00:07:09.904 Shutdown signal received, stop current app iteration 00:07:09.904 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:07:09.904 spdk_app_start is called in Round 2. 00:07:09.904 Shutdown signal received, stop current app iteration 00:07:09.904 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:07:09.904 spdk_app_start is called in Round 3. 
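The teardown goes through the killprocess helper traced just above: verify the pid is still alive, look up its comm name (reactor_0 here) and refuse to signal a bare sudo process, then kill and wait. A compact sketch mirroring that sequence, not the exact autotest_common.sh source:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                     # nothing to do if it is already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app
            if [ "$name" = sudo ]; then
                return 1                               # never signal a bare sudo wrapper
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it (works for children of this shell)
    }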
00:07:09.904 Shutdown signal received, stop current app iteration 00:07:09.904 01:24:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:09.904 01:24:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:09.904 00:07:09.904 real 0m18.854s 00:07:09.904 user 0m41.455s 00:07:09.904 sys 0m3.256s 00:07:09.904 01:24:49 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.904 01:24:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.904 ************************************ 00:07:09.904 END TEST app_repeat 00:07:09.904 ************************************ 00:07:09.904 01:24:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:09.904 01:24:49 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.904 01:24:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.904 01:24:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.904 01:24:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.904 ************************************ 00:07:09.904 START TEST cpu_locks 00:07:09.904 ************************************ 00:07:09.904 01:24:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:09.904 * Looking for test storage... 00:07:09.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:09.904 01:24:49 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.904 01:24:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.904 01:24:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.162 01:24:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.162 01:24:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.162 01:24:49 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.163 --rc genhtml_branch_coverage=1 00:07:10.163 --rc genhtml_function_coverage=1 00:07:10.163 --rc genhtml_legend=1 00:07:10.163 --rc geninfo_all_blocks=1 00:07:10.163 --rc geninfo_unexecuted_blocks=1 00:07:10.163 00:07:10.163 ' 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.163 --rc genhtml_branch_coverage=1 00:07:10.163 --rc genhtml_function_coverage=1 00:07:10.163 --rc genhtml_legend=1 00:07:10.163 --rc geninfo_all_blocks=1 00:07:10.163 --rc geninfo_unexecuted_blocks=1 00:07:10.163 00:07:10.163 ' 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.163 --rc genhtml_branch_coverage=1 00:07:10.163 --rc genhtml_function_coverage=1 00:07:10.163 --rc genhtml_legend=1 00:07:10.163 --rc geninfo_all_blocks=1 00:07:10.163 --rc geninfo_unexecuted_blocks=1 00:07:10.163 00:07:10.163 ' 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.163 --rc genhtml_branch_coverage=1 00:07:10.163 --rc genhtml_function_coverage=1 00:07:10.163 --rc genhtml_legend=1 00:07:10.163 --rc geninfo_all_blocks=1 00:07:10.163 --rc geninfo_unexecuted_blocks=1 00:07:10.163 00:07:10.163 ' 00:07:10.163 01:24:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.163 01:24:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.163 01:24:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.163 01:24:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.163 01:24:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.163 ************************************ 
00:07:10.163 START TEST default_locks 00:07:10.163 ************************************ 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=771568 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 771568 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 771568 ']' 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.163 01:24:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.163 [2024-10-01 01:24:49.919863] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:10.163 [2024-10-01 01:24:49.919960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771568 ] 00:07:10.163 [2024-10-01 01:24:49.978607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.421 [2024-10-01 01:24:50.074839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.680 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.680 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:10.680 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 771568 00:07:10.680 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 771568 00:07:10.680 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.938 lslocks: write error 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 771568 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 771568 ']' 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 771568 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 771568 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 771568' 
00:07:10.938 killing process with pid 771568 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 771568 00:07:10.938 01:24:50 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 771568 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 771568 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 771568 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 771568 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 771568 ']' 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (771568) - No such process 00:07:11.505 ERROR: process (pid: 771568) is no longer running 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.505 00:07:11.505 real 0m1.235s 00:07:11.505 user 0m1.183s 00:07:11.505 sys 0m0.554s 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.505 01:24:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.505 ************************************ 00:07:11.505 END TEST default_locks 00:07:11.505 ************************************ 00:07:11.505 01:24:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.505 01:24:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.505 01:24:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.505 01:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.505 ************************************ 00:07:11.505 START TEST default_locks_via_rpc 00:07:11.505 ************************************ 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=771738 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 771738 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 771738 ']' 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
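The default_locks case that just finished boils down to: start spdk_tgt pinned to a single core, confirm via lslocks that it holds an spdk_cpu_lock file, kill it, and assert that a second waitforlisten on the dead pid fails (the "No such process" / "is no longer running" lines, with es=1 as the expected error). The stray "lslocks: write error" earlier appears to be harmless noise from grep -q closing the pipe as soon as it matches. In outline, treating the helper names as assumptions:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock        # the reactor holds a per-core lock file
    }

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &            # one core, so exactly one lock is expected
    pid=$!
    waitforlisten "$pid"                               # harness helper: wait for /var/tmp/spdk.sock
    locks_exist "$pid"
    killprocess "$pid"
    NOT waitforlisten "$pid"                           # harness helper: assert the command now fails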
00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.505 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.505 [2024-10-01 01:24:51.206741] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:11.505 [2024-10-01 01:24:51.206840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771738 ] 00:07:11.505 [2024-10-01 01:24:51.270368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.505 [2024-10-01 01:24:51.357842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 771738 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 771738 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 771738 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 771738 ']' 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 771738 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.072 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 771738 00:07:12.331 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.331 01:24:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.331 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 771738' 00:07:12.331 killing process with pid 771738 00:07:12.331 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 771738 00:07:12.331 01:24:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 771738 00:07:12.589 00:07:12.589 real 0m1.232s 00:07:12.589 user 0m1.175s 00:07:12.589 sys 0m0.554s 00:07:12.589 01:24:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.589 01:24:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.589 ************************************ 00:07:12.589 END TEST default_locks_via_rpc 00:07:12.589 ************************************ 00:07:12.589 01:24:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.589 01:24:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.589 01:24:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.589 01:24:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.589 ************************************ 00:07:12.589 START TEST non_locking_app_on_locked_coremask 00:07:12.589 ************************************ 00:07:12.589 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:12.589 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=771898 00:07:12.589 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 771898 /var/tmp/spdk.sock 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 771898 ']' 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.590 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-10-01 01:24:52.483422] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:12.849 [2024-10-01 01:24:52.483521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771898 ] 00:07:12.849 [2024-10-01 01:24:52.542305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.849 [2024-10-01 01:24:52.631498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=772020 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 772020 /var/tmp/spdk2.sock 00:07:13.107 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 772020 ']' 00:07:13.108 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.108 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.108 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.108 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.108 01:24:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.108 [2024-10-01 01:24:52.954869] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:13.108 [2024-10-01 01:24:52.954947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772020 ] 00:07:13.366 [2024-10-01 01:24:53.050441] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.366 [2024-10-01 01:24:53.050475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.624 [2024-10-01 01:24:53.240475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.190 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.190 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.190 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 771898 00:07:14.190 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 771898 00:07:14.190 01:24:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.756 lslocks: write error 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 771898 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 771898 ']' 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 771898 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 771898 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 771898' 00:07:14.756 killing process with pid 771898 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 771898 00:07:14.756 01:24:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 771898 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 772020 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 772020 ']' 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 772020 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772020 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772020' 00:07:15.692 killing 
process with pid 772020 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 772020 00:07:15.692 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 772020 00:07:15.950 00:07:15.950 real 0m3.312s 00:07:15.950 user 0m3.514s 00:07:15.950 sys 0m1.095s 00:07:15.950 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.950 01:24:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.950 ************************************ 00:07:15.950 END TEST non_locking_app_on_locked_coremask 00:07:15.950 ************************************ 00:07:15.950 01:24:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:15.950 01:24:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.950 01:24:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.950 01:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.950 ************************************ 00:07:15.950 START TEST locking_app_on_unlocked_coremask 00:07:15.950 ************************************ 00:07:15.950 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:15.950 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=772332 00:07:15.950 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:15.950 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 772332 /var/tmp/spdk.sock 00:07:15.950 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 772332 ']' 00:07:15.951 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.951 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.951 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.951 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.951 01:24:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.209 [2024-10-01 01:24:55.842331] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:16.210 [2024-10-01 01:24:55.842427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772332 ] 00:07:16.210 [2024-10-01 01:24:55.901805] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.210 [2024-10-01 01:24:55.901850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.210 [2024-10-01 01:24:55.991477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=772462 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 772462 /var/tmp/spdk2.sock 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 772462 ']' 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.468 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.469 01:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.469 [2024-10-01 01:24:56.319418] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:16.469 [2024-10-01 01:24:56.319515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772462 ] 00:07:16.727 [2024-10-01 01:24:56.419907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.018 [2024-10-01 01:24:56.604953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.614 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.614 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.614 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 772462 00:07:17.614 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 772462 00:07:17.614 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.177 lslocks: write error 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 772332 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 772332 ']' 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 772332 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772332 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772332' 00:07:18.177 killing process with pid 772332 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 772332 00:07:18.177 01:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 772332 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 772462 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 772462 ']' 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 772462 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772462 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.110 01:24:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772462' 00:07:19.110 killing process with pid 772462 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 772462 00:07:19.110 01:24:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 772462 00:07:19.368 00:07:19.368 real 0m3.295s 00:07:19.368 user 0m3.487s 00:07:19.368 sys 0m1.093s 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 END TEST locking_app_on_unlocked_coremask 00:07:19.368 ************************************ 00:07:19.368 01:24:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:19.368 01:24:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.368 01:24:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.368 01:24:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 ************************************ 00:07:19.368 START TEST locking_app_on_locked_coremask 00:07:19.368 ************************************ 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=772783 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 772783 /var/tmp/spdk.sock 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 772783 ']' 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.368 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.368 [2024-10-01 01:24:59.191974] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:19.368 [2024-10-01 01:24:59.192092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772783 ] 00:07:19.625 [2024-10-01 01:24:59.255870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.625 [2024-10-01 01:24:59.344599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=772893 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 772893 /var/tmp/spdk2.sock 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 772893 /var/tmp/spdk2.sock 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 772893 /var/tmp/spdk2.sock 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 772893 ']' 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.883 01:24:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.883 [2024-10-01 01:24:59.667743] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:19.883 [2024-10-01 01:24:59.667825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772893 ] 00:07:20.141 [2024-10-01 01:24:59.760541] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 772783 has claimed it. 00:07:20.141 [2024-10-01 01:24:59.760603] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (772893) - No such process 00:07:20.705 ERROR: process (pid: 772893) is no longer running 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 772783 00:07:20.705 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 772783 00:07:20.706 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.963 lslocks: write error 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 772783 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 772783 ']' 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 772783 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.963 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772783 00:07:21.221 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.221 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.221 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772783' 00:07:21.221 killing process with pid 772783 00:07:21.221 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 772783 00:07:21.221 01:25:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 772783 00:07:21.480 00:07:21.480 real 0m2.108s 00:07:21.480 user 0m2.285s 00:07:21.480 sys 0m0.687s 00:07:21.480 01:25:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.480 
01:25:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.480 ************************************ 00:07:21.480 END TEST locking_app_on_locked_coremask 00:07:21.480 ************************************ 00:07:21.480 01:25:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:21.480 01:25:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.480 01:25:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.480 01:25:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.480 ************************************ 00:07:21.480 START TEST locking_overlapped_coremask 00:07:21.480 ************************************ 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=773087 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 773087 /var/tmp/spdk.sock 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 773087 ']' 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.480 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.739 [2024-10-01 01:25:01.353890] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:21.739 [2024-10-01 01:25:01.353987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773087 ] 00:07:21.739 [2024-10-01 01:25:01.418332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.739 [2024-10-01 01:25:01.508818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.739 [2024-10-01 01:25:01.508873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.739 [2024-10-01 01:25:01.508889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=773099 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 773099 /var/tmp/spdk2.sock 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 773099 /var/tmp/spdk2.sock 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 773099 /var/tmp/spdk2.sock 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 773099 ']' 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.997 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.998 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.998 01:25:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.998 [2024-10-01 01:25:01.836675] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:21.998 [2024-10-01 01:25:01.836780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773099 ] 00:07:22.255 [2024-10-01 01:25:01.930071] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 773087 has claimed it. 00:07:22.255 [2024-10-01 01:25:01.930136] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:22.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (773099) - No such process 00:07:22.820 ERROR: process (pid: 773099) is no longer running 00:07:22.820 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.820 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 773087 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 773087 ']' 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 773087 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773087 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773087' 00:07:22.821 killing process with pid 773087 00:07:22.821 01:25:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 773087 00:07:22.821 01:25:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 773087 00:07:23.386 00:07:23.386 real 0m1.716s 00:07:23.386 user 0m4.680s 00:07:23.386 sys 0m0.509s 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.386 ************************************ 00:07:23.386 END TEST locking_overlapped_coremask 00:07:23.386 ************************************ 00:07:23.386 01:25:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:23.386 01:25:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.386 01:25:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.386 01:25:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.386 ************************************ 00:07:23.386 START TEST locking_overlapped_coremask_via_rpc 00:07:23.386 ************************************ 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=773376 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 773376 /var/tmp/spdk.sock 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 773376 ']' 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.386 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.386 [2024-10-01 01:25:03.121285] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:23.386 [2024-10-01 01:25:03.121384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773376 ] 00:07:23.386 [2024-10-01 01:25:03.185574] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.386 [2024-10-01 01:25:03.185613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.644 [2024-10-01 01:25:03.279444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.644 [2024-10-01 01:25:03.279525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.644 [2024-10-01 01:25:03.279528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=773392 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 773392 /var/tmp/spdk2.sock 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 773392 ']' 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.903 01:25:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.903 [2024-10-01 01:25:03.606354] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:23.903 [2024-10-01 01:25:03.606451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773392 ] 00:07:23.903 [2024-10-01 01:25:03.693439] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.903 [2024-10-01 01:25:03.693488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.161 [2024-10-01 01:25:03.864497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.161 [2024-10-01 01:25:03.868092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.161 [2024-10-01 01:25:03.868094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.093 [2024-10-01 01:25:04.602105] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 773376 has claimed it. 
00:07:25.093 request: 00:07:25.093 { 00:07:25.093 "method": "framework_enable_cpumask_locks", 00:07:25.093 "req_id": 1 00:07:25.093 } 00:07:25.093 Got JSON-RPC error response 00:07:25.093 response: 00:07:25.093 { 00:07:25.093 "code": -32603, 00:07:25.093 "message": "Failed to claim CPU core: 2" 00:07:25.093 } 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 773376 /var/tmp/spdk.sock 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 773376 ']' 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.093 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 773392 /var/tmp/spdk2.sock 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 773392 ']' 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
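Editorial aside, not part of the captured run: the JSON-RPC failure above ("Failed to claim CPU core: 2") is the expected result of this test case, since the second spdk_tgt shares core 2 with the first target, which has already taken the core lock. A minimal sketch of reproducing the same conflict by hand is shown below. It assumes an SPDK build tree and the standard scripts/rpc.py client; the spdk_tgt flags, RPC names, socket paths, and lock-file names are taken from this log, while the sleeps and core masks are illustrative only.

    # Sketch only -- reproduces the "Failed to claim CPU core" conflict seen above.
    # Flags and RPC names are taken from this log; everything else is illustrative.

    # First target: cores 0-2, started without core locks, then claims them via RPC.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    sleep 2   # the test suite instead polls the RPC socket via waitforlisten
    ./scripts/rpc.py framework_enable_cpumask_locks   # creates /var/tmp/spdk_cpu_lock_000..002

    # Second target: overlapping mask (includes core 2), separate RPC socket, no locks yet.
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2

    # Asking the second target to take its locks now fails with
    # "Failed to claim CPU core: 2", because the first target already holds that lock.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks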
00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.094 01:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:25.351 00:07:25.351 real 0m2.079s 00:07:25.351 user 0m1.131s 00:07:25.351 sys 0m0.164s 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.351 01:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.351 ************************************ 00:07:25.351 END TEST locking_overlapped_coremask_via_rpc 00:07:25.351 ************************************ 00:07:25.351 01:25:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:25.351 01:25:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 773376 ]] 00:07:25.351 01:25:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 773376 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 773376 ']' 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 773376 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773376 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773376' 00:07:25.351 killing process with pid 773376 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 773376 00:07:25.351 01:25:05 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 773376 00:07:25.916 01:25:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 773392 ]] 00:07:25.916 01:25:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 773392 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 773392 ']' 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 773392 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773392 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773392' 00:07:25.916 killing process with pid 773392 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 773392 00:07:25.916 01:25:05 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 773392 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 773376 ]] 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 773376 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 773376 ']' 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 773376 00:07:26.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (773376) - No such process 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 773376 is not found' 00:07:26.481 Process with pid 773376 is not found 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 773392 ]] 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 773392 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 773392 ']' 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 773392 00:07:26.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (773392) - No such process 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 773392 is not found' 00:07:26.481 Process with pid 773392 is not found 00:07:26.481 01:25:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:26.481 00:07:26.481 real 0m16.406s 00:07:26.481 user 0m29.026s 00:07:26.481 sys 0m5.612s 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.481 01:25:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.481 ************************************ 00:07:26.481 END TEST cpu_locks 00:07:26.481 ************************************ 00:07:26.481 00:07:26.481 real 0m42.497s 00:07:26.481 user 1m21.206s 00:07:26.481 sys 0m9.746s 00:07:26.481 01:25:06 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.481 01:25:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.481 ************************************ 00:07:26.481 END TEST event 00:07:26.481 ************************************ 00:07:26.481 01:25:06 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:26.481 01:25:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.481 01:25:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.481 01:25:06 -- common/autotest_common.sh@10 -- # set +x 00:07:26.481 ************************************ 00:07:26.481 START TEST thread 00:07:26.481 ************************************ 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:26.481 * Looking for test storage... 00:07:26.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.481 01:25:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.481 01:25:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.481 01:25:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.481 01:25:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.481 01:25:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.481 01:25:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.481 01:25:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.481 01:25:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.481 01:25:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.481 01:25:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.481 01:25:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.481 01:25:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:26.481 01:25:06 thread -- scripts/common.sh@345 -- # : 1 00:07:26.481 01:25:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.481 01:25:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.481 01:25:06 thread -- scripts/common.sh@365 -- # decimal 1 00:07:26.481 01:25:06 thread -- scripts/common.sh@353 -- # local d=1 00:07:26.481 01:25:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.481 01:25:06 thread -- scripts/common.sh@355 -- # echo 1 00:07:26.481 01:25:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.481 01:25:06 thread -- scripts/common.sh@366 -- # decimal 2 00:07:26.481 01:25:06 thread -- scripts/common.sh@353 -- # local d=2 00:07:26.481 01:25:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.481 01:25:06 thread -- scripts/common.sh@355 -- # echo 2 00:07:26.481 01:25:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.481 01:25:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.481 01:25:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.481 01:25:06 thread -- scripts/common.sh@368 -- # return 0 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.481 01:25:06 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.481 --rc genhtml_branch_coverage=1 00:07:26.481 --rc genhtml_function_coverage=1 00:07:26.482 --rc genhtml_legend=1 00:07:26.482 --rc geninfo_all_blocks=1 00:07:26.482 --rc geninfo_unexecuted_blocks=1 00:07:26.482 00:07:26.482 ' 00:07:26.482 01:25:06 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.482 --rc genhtml_branch_coverage=1 00:07:26.482 --rc genhtml_function_coverage=1 00:07:26.482 --rc genhtml_legend=1 00:07:26.482 --rc geninfo_all_blocks=1 00:07:26.482 --rc geninfo_unexecuted_blocks=1 00:07:26.482 00:07:26.482 ' 00:07:26.482 01:25:06 thread 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.482 --rc genhtml_branch_coverage=1 00:07:26.482 --rc genhtml_function_coverage=1 00:07:26.482 --rc genhtml_legend=1 00:07:26.482 --rc geninfo_all_blocks=1 00:07:26.482 --rc geninfo_unexecuted_blocks=1 00:07:26.482 00:07:26.482 ' 00:07:26.482 01:25:06 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.482 --rc genhtml_branch_coverage=1 00:07:26.482 --rc genhtml_function_coverage=1 00:07:26.482 --rc genhtml_legend=1 00:07:26.482 --rc geninfo_all_blocks=1 00:07:26.482 --rc geninfo_unexecuted_blocks=1 00:07:26.482 00:07:26.482 ' 00:07:26.482 01:25:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.482 01:25:06 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:26.482 01:25:06 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.482 01:25:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.739 ************************************ 00:07:26.739 START TEST thread_poller_perf 00:07:26.739 ************************************ 00:07:26.740 01:25:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:26.740 [2024-10-01 01:25:06.349952] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:26.740 [2024-10-01 01:25:06.350037] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773855 ] 00:07:26.740 [2024-10-01 01:25:06.408360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.740 [2024-10-01 01:25:06.498603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.740 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:28.110 ====================================== 00:07:28.110 busy:2710776374 (cyc) 00:07:28.110 total_run_count: 292000 00:07:28.110 tsc_hz: 2700000000 (cyc) 00:07:28.110 ====================================== 00:07:28.110 poller_cost: 9283 (cyc), 3438 (nsec) 00:07:28.110 00:07:28.110 real 0m1.251s 00:07:28.110 user 0m1.158s 00:07:28.110 sys 0m0.087s 00:07:28.110 01:25:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.110 01:25:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.110 ************************************ 00:07:28.110 END TEST thread_poller_perf 00:07:28.110 ************************************ 00:07:28.110 01:25:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.110 01:25:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:28.110 01:25:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.110 01:25:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.110 ************************************ 00:07:28.110 START TEST thread_poller_perf 00:07:28.110 ************************************ 00:07:28.110 01:25:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.110 [2024-10-01 01:25:07.651125] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:28.110 [2024-10-01 01:25:07.651192] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774042 ] 00:07:28.110 [2024-10-01 01:25:07.714079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.110 [2024-10-01 01:25:07.806765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.110 Running 1000 pollers for 1 seconds with 0 microseconds period. 
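The poller_cost figures that poller_perf prints are simply the busy TSC cycles divided by the number of completed polls, converted to nanoseconds via the reported tsc_hz. Below is a minimal sketch of that arithmetic in shell, using the numbers from the 1-microsecond-period run above (the variable names are illustrative, not taken from poller_perf's source); the 0-microsecond run reported next works out the same way, 2702576296 / 3856000 giving roughly 700 cycles, about 259 ns.

    busy_cyc=2710776374      # busy: (cyc) from the summary above
    run_count=292000         # total_run_count
    tsc_hz=2700000000        # tsc_hz: (cyc)
    cost_cyc=$(( busy_cyc / run_count ))               # 9283 cycles per poll
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # about 3438 ns at 2.7 GHz
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"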
00:07:29.042 ====================================== 00:07:29.042 busy:2702576296 (cyc) 00:07:29.042 total_run_count: 3856000 00:07:29.042 tsc_hz: 2700000000 (cyc) 00:07:29.042 ====================================== 00:07:29.042 poller_cost: 700 (cyc), 259 (nsec) 00:07:29.042 00:07:29.042 real 0m1.253s 00:07:29.042 user 0m1.163s 00:07:29.042 sys 0m0.084s 00:07:29.042 01:25:08 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.042 01:25:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.042 ************************************ 00:07:29.042 END TEST thread_poller_perf 00:07:29.042 ************************************ 00:07:29.303 01:25:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:29.303 00:07:29.303 real 0m2.747s 00:07:29.303 user 0m2.462s 00:07:29.303 sys 0m0.288s 00:07:29.303 01:25:08 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.303 01:25:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.303 ************************************ 00:07:29.303 END TEST thread 00:07:29.303 ************************************ 00:07:29.303 01:25:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:29.303 01:25:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.303 01:25:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.303 01:25:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.303 01:25:08 -- common/autotest_common.sh@10 -- # set +x 00:07:29.303 ************************************ 00:07:29.303 START TEST app_cmdline 00:07:29.303 ************************************ 00:07:29.303 01:25:08 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.303 * Looking for test storage... 00:07:29.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.303 01:25:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.303 --rc genhtml_branch_coverage=1 00:07:29.303 --rc genhtml_function_coverage=1 00:07:29.303 --rc genhtml_legend=1 00:07:29.303 --rc geninfo_all_blocks=1 00:07:29.303 --rc geninfo_unexecuted_blocks=1 00:07:29.303 00:07:29.303 ' 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.303 --rc genhtml_branch_coverage=1 00:07:29.303 --rc genhtml_function_coverage=1 00:07:29.303 --rc genhtml_legend=1 00:07:29.303 --rc geninfo_all_blocks=1 00:07:29.303 --rc geninfo_unexecuted_blocks=1 00:07:29.303 00:07:29.303 ' 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.303 --rc genhtml_branch_coverage=1 00:07:29.303 --rc genhtml_function_coverage=1 00:07:29.303 --rc genhtml_legend=1 00:07:29.303 --rc geninfo_all_blocks=1 00:07:29.303 --rc geninfo_unexecuted_blocks=1 00:07:29.303 00:07:29.303 ' 00:07:29.303 01:25:09 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.303 --rc genhtml_branch_coverage=1 00:07:29.303 --rc genhtml_function_coverage=1 00:07:29.303 --rc genhtml_legend=1 00:07:29.303 --rc geninfo_all_blocks=1 00:07:29.303 --rc geninfo_unexecuted_blocks=1 00:07:29.303 00:07:29.303 ' 00:07:29.303 01:25:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.303 01:25:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=774247 00:07:29.304 01:25:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.304 01:25:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 774247 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 774247 ']' 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.304 01:25:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.304 [2024-10-01 01:25:09.151675] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:29.304 [2024-10-01 01:25:09.151764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774247 ] 00:07:29.562 [2024-10-01 01:25:09.210273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.562 [2024-10-01 01:25:09.296174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.820 01:25:09 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.820 01:25:09 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:29.820 01:25:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:30.077 { 00:07:30.077 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:07:30.077 "fields": { 00:07:30.077 "major": 25, 00:07:30.077 "minor": 1, 00:07:30.077 "patch": 0, 00:07:30.077 "suffix": "-pre", 00:07:30.077 "commit": "09cc66129" 00:07:30.077 } 00:07:30.077 } 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.077 01:25:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:30.077 01:25:09 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.334 request: 00:07:30.334 { 00:07:30.334 "method": "env_dpdk_get_mem_stats", 00:07:30.334 "req_id": 1 00:07:30.334 } 00:07:30.334 Got JSON-RPC error response 00:07:30.334 response: 00:07:30.334 { 00:07:30.334 "code": -32601, 00:07:30.334 "message": "Method not found" 00:07:30.334 } 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.334 01:25:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 774247 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 774247 ']' 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 774247 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.334 01:25:10 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774247 00:07:30.592 01:25:10 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.592 01:25:10 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.592 01:25:10 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774247' 00:07:30.592 killing process with pid 774247 00:07:30.592 01:25:10 app_cmdline -- common/autotest_common.sh@969 -- # kill 774247 00:07:30.592 01:25:10 app_cmdline -- common/autotest_common.sh@974 -- # wait 774247 00:07:30.849 00:07:30.849 real 0m1.676s 00:07:30.849 user 0m2.022s 00:07:30.849 sys 0m0.525s 00:07:30.849 01:25:10 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.849 01:25:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.849 ************************************ 00:07:30.849 END TEST app_cmdline 00:07:30.849 ************************************ 00:07:30.849 01:25:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:30.850 01:25:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.850 01:25:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.850 01:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:30.850 ************************************ 00:07:30.850 START TEST version 00:07:30.850 ************************************ 00:07:30.850 01:25:10 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:31.107 * Looking for test storage... 
00:07:31.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.107 01:25:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.107 01:25:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.107 01:25:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.107 01:25:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.107 01:25:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.107 01:25:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.107 01:25:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.107 01:25:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.107 01:25:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.107 01:25:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.107 01:25:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.107 01:25:10 version -- scripts/common.sh@344 -- # case "$op" in 00:07:31.107 01:25:10 version -- scripts/common.sh@345 -- # : 1 00:07:31.107 01:25:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.107 01:25:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.107 01:25:10 version -- scripts/common.sh@365 -- # decimal 1 00:07:31.107 01:25:10 version -- scripts/common.sh@353 -- # local d=1 00:07:31.107 01:25:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.107 01:25:10 version -- scripts/common.sh@355 -- # echo 1 00:07:31.107 01:25:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.107 01:25:10 version -- scripts/common.sh@366 -- # decimal 2 00:07:31.107 01:25:10 version -- scripts/common.sh@353 -- # local d=2 00:07:31.107 01:25:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.107 01:25:10 version -- scripts/common.sh@355 -- # echo 2 00:07:31.107 01:25:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.107 01:25:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.107 01:25:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.107 01:25:10 version -- scripts/common.sh@368 -- # return 0 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.107 --rc genhtml_branch_coverage=1 00:07:31.107 --rc genhtml_function_coverage=1 00:07:31.107 --rc genhtml_legend=1 00:07:31.107 --rc geninfo_all_blocks=1 00:07:31.107 --rc geninfo_unexecuted_blocks=1 00:07:31.107 00:07:31.107 ' 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.107 --rc genhtml_branch_coverage=1 00:07:31.107 --rc genhtml_function_coverage=1 00:07:31.107 --rc genhtml_legend=1 00:07:31.107 --rc geninfo_all_blocks=1 00:07:31.107 --rc geninfo_unexecuted_blocks=1 00:07:31.107 00:07:31.107 ' 00:07:31.107 01:25:10 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.107 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.107 --rc genhtml_branch_coverage=1 00:07:31.107 --rc genhtml_function_coverage=1 00:07:31.107 --rc genhtml_legend=1 00:07:31.107 --rc geninfo_all_blocks=1 00:07:31.107 --rc geninfo_unexecuted_blocks=1 00:07:31.107 00:07:31.107 ' 00:07:31.108 01:25:10 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.108 --rc genhtml_branch_coverage=1 00:07:31.108 --rc genhtml_function_coverage=1 00:07:31.108 --rc genhtml_legend=1 00:07:31.108 --rc geninfo_all_blocks=1 00:07:31.108 --rc geninfo_unexecuted_blocks=1 00:07:31.108 00:07:31.108 ' 00:07:31.108 01:25:10 version -- app/version.sh@17 -- # get_header_version major 00:07:31.108 01:25:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # cut -f2 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.108 01:25:10 version -- app/version.sh@17 -- # major=25 00:07:31.108 01:25:10 version -- app/version.sh@18 -- # get_header_version minor 00:07:31.108 01:25:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # cut -f2 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.108 01:25:10 version -- app/version.sh@18 -- # minor=1 00:07:31.108 01:25:10 version -- app/version.sh@19 -- # get_header_version patch 00:07:31.108 01:25:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # cut -f2 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.108 01:25:10 version -- app/version.sh@19 -- # patch=0 00:07:31.108 01:25:10 version -- app/version.sh@20 -- # get_header_version suffix 00:07:31.108 01:25:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # cut -f2 00:07:31.108 01:25:10 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.108 01:25:10 version -- app/version.sh@20 -- # suffix=-pre 00:07:31.108 01:25:10 version -- app/version.sh@22 -- # version=25.1 00:07:31.108 01:25:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.108 01:25:10 version -- app/version.sh@28 -- # version=25.1rc0 00:07:31.108 01:25:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:31.108 01:25:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.108 01:25:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:31.108 01:25:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:31.108 00:07:31.108 real 0m0.199s 00:07:31.108 user 0m0.129s 00:07:31.108 sys 0m0.096s 00:07:31.108 01:25:10 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.108 
01:25:10 version -- common/autotest_common.sh@10 -- # set +x 00:07:31.108 ************************************ 00:07:31.108 END TEST version 00:07:31.108 ************************************ 00:07:31.108 01:25:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:31.108 01:25:10 -- spdk/autotest.sh@194 -- # uname -s 00:07:31.108 01:25:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:31.108 01:25:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:31.108 01:25:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:31.108 01:25:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:31.108 01:25:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.108 01:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:31.108 01:25:10 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:31.108 01:25:10 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:31.108 01:25:10 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.108 01:25:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.108 01:25:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.108 01:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:31.108 ************************************ 00:07:31.108 START TEST nvmf_tcp 00:07:31.108 ************************************ 00:07:31.108 01:25:10 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.366 * Looking for test storage... 
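get_header_version, traced in the version test above, is a plain grep/cut/tr pipeline over include/spdk/version.h. The sketch below repeats the same pipeline for the four macros and reassembles the 25.1rc0 string the test compares against python3's spdk.__version__ (the ver helper is only shorthand here, not part of app/version.sh):

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR); minor=$(ver MINOR); patch=$(ver PATCH); suffix=$(ver SUFFIX)
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0   # -pre is reported as rc0, as in the trace
    echo "$version"                                  # 25.1rc0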
00:07:31.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.366 01:25:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.366 --rc genhtml_branch_coverage=1 00:07:31.366 --rc genhtml_function_coverage=1 00:07:31.366 --rc genhtml_legend=1 00:07:31.366 --rc geninfo_all_blocks=1 00:07:31.366 --rc geninfo_unexecuted_blocks=1 00:07:31.366 00:07:31.366 ' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.366 --rc genhtml_branch_coverage=1 00:07:31.366 --rc genhtml_function_coverage=1 00:07:31.366 --rc genhtml_legend=1 00:07:31.366 --rc geninfo_all_blocks=1 00:07:31.366 --rc geninfo_unexecuted_blocks=1 00:07:31.366 00:07:31.366 ' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:31.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.366 --rc genhtml_branch_coverage=1 00:07:31.366 --rc genhtml_function_coverage=1 00:07:31.366 --rc genhtml_legend=1 00:07:31.366 --rc geninfo_all_blocks=1 00:07:31.366 --rc geninfo_unexecuted_blocks=1 00:07:31.366 00:07:31.366 ' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.366 --rc genhtml_branch_coverage=1 00:07:31.366 --rc genhtml_function_coverage=1 00:07:31.366 --rc genhtml_legend=1 00:07:31.366 --rc geninfo_all_blocks=1 00:07:31.366 --rc geninfo_unexecuted_blocks=1 00:07:31.366 00:07:31.366 ' 00:07:31.366 01:25:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:31.366 01:25:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:31.366 01:25:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.366 01:25:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.366 ************************************ 00:07:31.366 START TEST nvmf_target_core 00:07:31.366 ************************************ 00:07:31.366 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:31.366 * Looking for test storage... 00:07:31.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:31.366 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.366 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.366 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.625 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.626 --rc genhtml_branch_coverage=1 00:07:31.626 --rc genhtml_function_coverage=1 00:07:31.626 --rc genhtml_legend=1 00:07:31.626 --rc geninfo_all_blocks=1 00:07:31.626 --rc geninfo_unexecuted_blocks=1 00:07:31.626 00:07:31.626 ' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.626 --rc genhtml_branch_coverage=1 00:07:31.626 --rc genhtml_function_coverage=1 00:07:31.626 --rc genhtml_legend=1 00:07:31.626 --rc geninfo_all_blocks=1 00:07:31.626 --rc geninfo_unexecuted_blocks=1 00:07:31.626 00:07:31.626 ' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.626 --rc genhtml_branch_coverage=1 00:07:31.626 --rc genhtml_function_coverage=1 00:07:31.626 --rc genhtml_legend=1 00:07:31.626 --rc geninfo_all_blocks=1 00:07:31.626 --rc geninfo_unexecuted_blocks=1 00:07:31.626 00:07:31.626 ' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.626 --rc genhtml_branch_coverage=1 00:07:31.626 --rc genhtml_function_coverage=1 00:07:31.626 --rc genhtml_legend=1 00:07:31.626 --rc geninfo_all_blocks=1 00:07:31.626 --rc geninfo_unexecuted_blocks=1 00:07:31.626 00:07:31.626 ' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.626 
************************************ 00:07:31.626 START TEST nvmf_abort 00:07:31.626 ************************************ 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:31.626 * Looking for test storage... 00:07:31.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.626 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.627 --rc genhtml_branch_coverage=1 00:07:31.627 --rc genhtml_function_coverage=1 00:07:31.627 --rc genhtml_legend=1 00:07:31.627 --rc geninfo_all_blocks=1 00:07:31.627 --rc geninfo_unexecuted_blocks=1 00:07:31.627 00:07:31.627 ' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.627 --rc genhtml_branch_coverage=1 00:07:31.627 --rc genhtml_function_coverage=1 00:07:31.627 --rc genhtml_legend=1 00:07:31.627 --rc geninfo_all_blocks=1 00:07:31.627 --rc geninfo_unexecuted_blocks=1 00:07:31.627 00:07:31.627 ' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.627 --rc genhtml_branch_coverage=1 00:07:31.627 --rc genhtml_function_coverage=1 00:07:31.627 --rc genhtml_legend=1 00:07:31.627 --rc geninfo_all_blocks=1 00:07:31.627 --rc geninfo_unexecuted_blocks=1 00:07:31.627 00:07:31.627 ' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.627 --rc genhtml_branch_coverage=1 00:07:31.627 --rc genhtml_function_coverage=1 00:07:31.627 --rc genhtml_legend=1 00:07:31.627 --rc geninfo_all_blocks=1 00:07:31.627 --rc geninfo_unexecuted_blocks=1 00:07:31.627 00:07:31.627 ' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
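nvmftestinit, whose expansion continues below, starts from the defaults that test/nvmf/common.sh established earlier in the trace: three TCP listener ports, a host NQN generated with nvme-cli, and NET_TYPE=phy for the physical NICs. Roughly, and only as an illustration of those defaults rather than a copy of common.sh:

    export NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    export NET_TYPE=phy NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)         # nvme-cli; nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # the bare uuid, matching NVME_HOSTID in the trace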
00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.627 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:31.628 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:31.628 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.628 01:25:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.158 01:25:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:34.158 01:25:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.158 01:25:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.158 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:07:34.159 00:07:34.159 --- 10.0.0.2 ping statistics --- 00:07:34.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.159 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:07:34.159 00:07:34.159 --- 10.0.0.1 ping statistics --- 00:07:34.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.159 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=776335 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 776335 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 776335 ']' 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.159 01:25:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.159 [2024-10-01 01:25:13.738855] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
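Annotation: at this point nvmf_tcp_init has finished building the two-node TCP test topology: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side with 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms connectivity before modprobe nvme-tcp. A condensed sketch of the same setup, using only commands that appear in the trace (the SPDK comment on the iptables rule is omitted):

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow TCP port 4420 (NVMe/TCP)
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator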
00:07:34.159 [2024-10-01 01:25:13.738952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.159 [2024-10-01 01:25:13.811965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.159 [2024-10-01 01:25:13.904862] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.159 [2024-10-01 01:25:13.904930] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.159 [2024-10-01 01:25:13.904956] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.159 [2024-10-01 01:25:13.904969] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.159 [2024-10-01 01:25:13.904981] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.159 [2024-10-01 01:25:13.905094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.159 [2024-10-01 01:25:13.905185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.159 [2024-10-01 01:25:13.905189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.416 [2024-10-01 01:25:14.050503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:34.416 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 Malloc0 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 Delay0 
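Annotation: with nvmf_tgt now running inside cvl_0_0_ns_spdk and, per the waitforlisten message above, answering RPCs on the default /var/tmp/spdk.sock socket, the rpc_cmd calls in abort.sh map onto ordinary scripts/rpc.py invocations. A sketch of the configuration traced so far (the $RPC shorthand is only for readability; the subsystem, namespace and listener RPCs follow in the trace below):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # uses /var/tmp/spdk.sock by default
  # abort.sh@17: TCP transport with the flags shown in the trace
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  # abort.sh@20: 64 MB malloc bdev with 4096-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  # abort.sh@21: delay bdev layered on Malloc0, adding artificial latency --
  # presumably so I/Os are still outstanding when the abort example issues aborts
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000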
00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 [2024-10-01 01:25:14.118852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.417 01:25:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:34.417 [2024-10-01 01:25:14.223976] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:36.944 Initializing NVMe Controllers 00:07:36.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:36.944 controller IO queue size 128 less than required 00:07:36.944 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:36.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:36.944 Initialization complete. Launching workers. 
00:07:36.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28951 00:07:36.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29012, failed to submit 62 00:07:36.944 success 28955, unsuccessful 57, failed 0 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.944 rmmod nvme_tcp 00:07:36.944 rmmod nvme_fabrics 00:07:36.944 rmmod nvme_keyring 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 776335 ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 776335 ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 776335' 00:07:36.944 killing process with pid 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 776335 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.944 01:25:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:39.475 00:07:39.475 real 0m7.428s 00:07:39.475 user 0m10.687s 00:07:39.475 sys 0m2.568s 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:39.475 ************************************ 00:07:39.475 END TEST nvmf_abort 00:07:39.475 ************************************ 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.475 ************************************ 00:07:39.475 START TEST nvmf_ns_hotplug_stress 00:07:39.475 ************************************ 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:39.475 * Looking for test storage... 
00:07:39.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.475 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.476 --rc genhtml_branch_coverage=1 00:07:39.476 --rc genhtml_function_coverage=1 00:07:39.476 --rc genhtml_legend=1 00:07:39.476 --rc geninfo_all_blocks=1 00:07:39.476 --rc geninfo_unexecuted_blocks=1 00:07:39.476 00:07:39.476 ' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.476 --rc genhtml_branch_coverage=1 00:07:39.476 --rc genhtml_function_coverage=1 00:07:39.476 --rc genhtml_legend=1 00:07:39.476 --rc geninfo_all_blocks=1 00:07:39.476 --rc geninfo_unexecuted_blocks=1 00:07:39.476 00:07:39.476 ' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.476 --rc genhtml_branch_coverage=1 00:07:39.476 --rc genhtml_function_coverage=1 00:07:39.476 --rc genhtml_legend=1 00:07:39.476 --rc geninfo_all_blocks=1 00:07:39.476 --rc geninfo_unexecuted_blocks=1 00:07:39.476 00:07:39.476 ' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.476 --rc genhtml_branch_coverage=1 00:07:39.476 --rc genhtml_function_coverage=1 00:07:39.476 --rc genhtml_legend=1 00:07:39.476 --rc geninfo_all_blocks=1 00:07:39.476 --rc geninfo_unexecuted_blocks=1 00:07:39.476 00:07:39.476 ' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.476 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:39.477 01:25:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:41.375 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.376 01:25:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.376 01:25:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:41.376 00:07:41.376 --- 10.0.0.2 ping statistics --- 00:07:41.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.376 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:41.376 00:07:41.376 --- 10.0.0.1 ping statistics --- 00:07:41.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.376 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=778687 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 778687 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 778687 ']' 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.376 01:25:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.376 [2024-10-01 01:25:21.027739] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:41.376 [2024-10-01 01:25:21.027826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.376 [2024-10-01 01:25:21.103826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.376 [2024-10-01 01:25:21.193644] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.376 [2024-10-01 01:25:21.193711] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.376 [2024-10-01 01:25:21.193728] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.376 [2024-10-01 01:25:21.193741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.376 [2024-10-01 01:25:21.193752] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
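Taken together, the nvmf/common.sh trace above amounts to a short target-side bring-up: one of the two discovered cvl interfaces (cvl_0_0) is moved into a private network namespace for the target while the other (cvl_0_1) stays in the root namespace for the initiator, the NVMe/TCP port is opened in the firewall, connectivity is verified in both directions, and nvmf_tgt is started inside the namespace (waitforlisten then blocks until the target answers on /var/tmp/spdk.sock). A minimal sketch of those steps, assuming the same interface names and with the ipts wrapper expanded to the underlying iptables call:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # ipts also tags the rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    modprobe nvme-tcp                                   # kernel NVMe/TCP initiator support
    # SPDK_DIR is a placeholder for the checkout used in this run
    # (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk)
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &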
00:07:41.377 [2024-10-01 01:25:21.193858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.377 [2024-10-01 01:25:21.193959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.377 [2024-10-01 01:25:21.193962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:41.634 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:41.892 [2024-10-01 01:25:21.585754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.892 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.149 01:25:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.407 [2024-10-01 01:25:22.155799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.407 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.665 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:42.922 Malloc0 00:07:42.923 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:43.180 Delay0 00:07:43.180 01:25:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.438 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:43.695 NULL1 00:07:43.695 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:43.954 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=778995 00:07:43.954 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:43.954 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:43.954 01:25:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.360 Read completed with error (sct=0, sc=11) 00:07:45.360 01:25:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.618 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:45.618 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:45.876 true 00:07:45.876 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:45.876 01:25:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.441 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.006 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:47.006 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:47.006 true 00:07:47.006 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:47.006 01:25:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.264 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.522 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:47.522 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:47.779 true 00:07:47.779 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:47.779 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.345 01:25:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.345 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:48.345 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:48.603 true 00:07:48.603 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:48.603 01:25:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.974 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.974 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:49.974 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:50.231 true 00:07:50.232 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:50.232 01:25:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.488 01:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.745 01:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:50.745 01:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:51.003 true 00:07:51.003 01:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:51.003 01:25:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.566 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.566 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:51.566 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:51.823 true 00:07:51.823 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:51.823 01:25:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.196 01:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.196 01:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:53.196 01:25:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:53.454 true 00:07:53.454 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:53.454 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.711 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.969 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:53.969 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:54.227 true 00:07:54.227 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:54.227 01:25:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.484 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.741 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:54.741 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
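With the target listening, the ns_hotplug_stress.sh trace above and below provisions the TCP transport, one subsystem with two backing bdevs, and a 30-second randread perf job, and then, for as long as the perf process stays alive, keeps detaching namespace 1, re-attaching Delay0 and growing NULL1. A condensed sketch of that sequence; rpc.py and spdk_nvme_perf abbreviate the full paths under the spdk checkout, and the while-loop shape is inferred from the repeating @44-@50 trace lines rather than copied from the script:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0          # 32 MB malloc bdev, 512-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512               # null bdev that the loop keeps resizing
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # 30 s of queued randread I/O from the initiator side, run in the background
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do                        # hot-plug until perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"       # 1001, 1002, ... as in the trace
    done

The bursts of 'Read completed with error (sct=0, sc=11)' in the surrounding trace are, as far as this log shows, the perf reads failing while namespace 1 is momentarily detached, which is the condition the stress is meant to create.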
00:07:54.998 true 00:07:54.998 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:54.998 01:25:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.931 01:25:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.189 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:56.189 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:56.446 true 00:07:56.446 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:56.446 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.703 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.267 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:57.267 01:25:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:57.267 true 00:07:57.267 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:57.267 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.524 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.783 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:57.783 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:58.040 true 00:07:58.040 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:58.040 01:25:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.410 01:25:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.410 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:59.410 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:59.668 true 00:07:59.668 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:07:59.668 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.925 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.181 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:00.181 01:25:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:00.438 true 00:08:00.438 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:00.438 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.695 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.952 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:00.952 01:25:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:01.514 true 00:08:01.514 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:01.514 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.445 01:25:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.445 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:02.445 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:02.702 true 00:08:02.702 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:02.702 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.959 01:25:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.216 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:03.216 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:03.473 true 00:08:03.730 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:03.730 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.987 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.244 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:04.244 01:25:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:04.501 true 00:08:04.501 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:04.501 01:25:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.433 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.690 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:05.690 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:05.946 true 00:08:05.946 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:05.946 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.203 01:25:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.460 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:06.460 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:06.718 true 00:08:06.718 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:06.718 01:25:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.651 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.651 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:07.651 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:07.907 true 00:08:07.907 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:07.907 01:25:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.470 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.470 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:08.470 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:08.791 true 00:08:08.791 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:08.791 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.047 01:25:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.303 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:09.303 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:09.560 true 00:08:09.560 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:09.560 01:25:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.565 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.822 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:10.822 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:11.081 true 00:08:11.081 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:11.081 01:25:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.339 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.905 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:11.905 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:11.905 true 00:08:11.905 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:11.905 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.163 01:25:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.421 01:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:12.421 01:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:12.680 true 00:08:12.938 01:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:12.938 01:25:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.871 01:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.128 01:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:14.128 01:25:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:14.387 true 00:08:14.387 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:14.387 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.387 Initializing NVMe Controllers 00:08:14.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.387 Controller IO queue size 128, less than required. 
00:08:14.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.387 Controller IO queue size 128, less than required. 00:08:14.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:14.387 Initialization complete. Launching workers. 00:08:14.387 ======================================================== 00:08:14.387 Latency(us) 00:08:14.387 Device Information : IOPS MiB/s Average min max 00:08:14.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 443.81 0.22 116471.12 3001.04 1093502.67 00:08:14.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8522.73 4.16 15018.37 3613.37 455824.58 00:08:14.387 ======================================================== 00:08:14.387 Total : 8966.54 4.38 20039.88 3001.04 1093502.67 00:08:14.387 00:08:14.645 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.903 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:14.903 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:15.161 true 00:08:15.161 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 778995 00:08:15.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (778995) - No such process 00:08:15.161 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 778995 00:08:15.161 01:25:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.419 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.678 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:15.678 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:15.678 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:15.678 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.678 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:15.935 null0 00:08:15.935 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.935 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.935 01:25:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:16.193 null1 00:08:16.193 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.193 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.193 01:25:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:16.451 null2 00:08:16.451 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.451 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.451 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:16.708 null3 00:08:16.708 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.708 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.709 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:16.967 null4 00:08:16.967 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:16.967 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.967 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:17.224 null5 00:08:17.224 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.224 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.224 01:25:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:17.482 null6 00:08:17.482 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.482 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.482 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:17.740 null7 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 
00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.740 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
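The final phase traced here drops the perf job and exercises namespace attach/detach in parallel instead: eight small null bdevs (null0 through null7, 100 MB with a 4096-byte block size) are created, one background add_remove worker is started per bdev, and each worker attaches its bdev to cnode1 under a fixed NSID and detaches it again ten times before the parent waits on all worker PIDs (the 'wait 783070 783071 ...' entry that follows). A sketch of that phase; the add_remove body is reconstructed from the @14-@18 trace lines and rpc.py again abbreviates the full scripts/rpc.py path:

    add_remove() {                                       # one worker per namespace (@14-@18)
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # @59-@60: create null0..null7
        rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                 # @62-@64: launch the workers
        add_remove $((i + 1)) "null$i" &                 # NSIDs 1..8 map onto null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                    # @66: join all eight workers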
00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 783070 783071 783073 783075 783077 783079 783081 783083 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.741 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.999 01:25:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.564 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.565 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.822 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.822 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.822 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.822 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.822 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.823 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.823 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.080 01:25:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.340 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.598 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.856 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
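For readers tracing the xtrace above: the @16/@17/@18 markers all point into the stress loop of target/ns_hotplug_stress.sh, and the out-of-order interleaving of the (( ++i )) / (( i < 10 )) counters with the add/remove RPCs indicates that several workers run concurrently. The sketch below is an assumed reconstruction of that pattern, not the verbatim script; the add_remove helper name and the use of background jobs are illustrative assumptions, while the rpc.py invocations are copied from the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
add_remove() {                                     # hypothetical per-namespace worker
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                 # the @16 counter echoed above
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17: attach the bdev as a namespace
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18: detach it again
    done
}
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &             # null0..null7 mapped to namespace IDs 1..8, run in parallel
done
wait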
00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.114 01:25:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.681 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.939 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.197 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.197 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.197 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.197 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.197 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.198 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.198 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.198 01:26:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.455 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.713 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.971 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.972 01:26:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.229 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.229 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.229 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.230 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.230 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.230 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.230 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.230 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.488 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.746 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.004 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
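As a quick reference for the two RPCs being exercised here (the example invocations below are copied from calls that appear in this log; the per-argument explanations are an interpretation, not test output): nvmf_subsystem_add_ns takes the namespace ID after -n, then the subsystem NQN and the bdev to attach, while nvmf_subsystem_remove_ns takes the subsystem NQN and the namespace ID to detach.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0    # expose bdev null0 as namespace 1 of cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # remove namespace 1 from cnode1 again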
00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.262 01:26:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.520 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.778 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.779 rmmod nvme_tcp 00:08:23.779 rmmod nvme_fabrics 00:08:23.779 rmmod nvme_keyring 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 778687 ']' 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 778687 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 778687 ']' 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 778687 00:08:23.779 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778687 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778687' 00:08:24.037 killing process with pid 778687 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 778687 00:08:24.037 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 778687 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 
-- # [[ tcp == \t\c\p ]] 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.295 01:26:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.198 00:08:26.198 real 0m47.171s 00:08:26.198 user 3m36.955s 00:08:26.198 sys 0m17.077s 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.198 ************************************ 00:08:26.198 END TEST nvmf_ns_hotplug_stress 00:08:26.198 ************************************ 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.198 01:26:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.198 ************************************ 00:08:26.198 START TEST nvmf_delete_subsystem 00:08:26.198 ************************************ 00:08:26.198 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:26.457 * Looking for test storage... 
00:08:26.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.457 --rc genhtml_branch_coverage=1 00:08:26.457 --rc genhtml_function_coverage=1 00:08:26.457 --rc genhtml_legend=1 00:08:26.457 --rc geninfo_all_blocks=1 00:08:26.457 --rc geninfo_unexecuted_blocks=1 00:08:26.457 00:08:26.457 ' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.457 --rc genhtml_branch_coverage=1 00:08:26.457 --rc genhtml_function_coverage=1 00:08:26.457 --rc genhtml_legend=1 00:08:26.457 --rc geninfo_all_blocks=1 00:08:26.457 --rc geninfo_unexecuted_blocks=1 00:08:26.457 00:08:26.457 ' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.457 --rc genhtml_branch_coverage=1 00:08:26.457 --rc genhtml_function_coverage=1 00:08:26.457 --rc genhtml_legend=1 00:08:26.457 --rc geninfo_all_blocks=1 00:08:26.457 --rc geninfo_unexecuted_blocks=1 00:08:26.457 00:08:26.457 ' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.457 --rc genhtml_branch_coverage=1 00:08:26.457 --rc genhtml_function_coverage=1 00:08:26.457 --rc genhtml_legend=1 00:08:26.457 --rc geninfo_all_blocks=1 00:08:26.457 --rc geninfo_unexecuted_blocks=1 00:08:26.457 00:08:26.457 ' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.457 01:26:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:28.989 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:28.990 01:26:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
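Note: the trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs resolving the two Intel 0x159b (E810) PCI functions to their kernel net devices (cvl_0_0 and cvl_0_1) through sysfs. Below is a minimal sketch of that lookup, assuming a pre-populated pci_devs array; the traced script also checks the device's link state ("[[ up == up ]]"), which is simplified here to a plain existence test added only for illustration.
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
      [[ -e ${pci_net_devs[0]} ]] || continue            # skip functions with no bound netdev (assumption, not in the traced script)
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done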
00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:28.990 00:08:28.990 --- 10.0.0.2 ping statistics --- 00:08:28.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.990 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:08:28.990 00:08:28.990 --- 10.0.0.1 ping statistics --- 00:08:28.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.990 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=785973 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 785973 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 785973 ']' 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:28.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.990 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.990 [2024-10-01 01:26:08.556884] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:28.990 [2024-10-01 01:26:08.556979] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.990 [2024-10-01 01:26:08.639414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.990 [2024-10-01 01:26:08.733952] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.991 [2024-10-01 01:26:08.734045] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.991 [2024-10-01 01:26:08.734088] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.991 [2024-10-01 01:26:08.734114] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.991 [2024-10-01 01:26:08.734136] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.991 [2024-10-01 01:26:08.734207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.991 [2024-10-01 01:26:08.734217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 [2024-10-01 01:26:08.948585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 [2024-10-01 01:26:08.964818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 NULL1 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 Delay0 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=786114 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:29.248 01:26:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:29.248 [2024-10-01 01:26:09.039627] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
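Note: the records above set up the whole fixture for nvmf_delete_subsystem: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (Delay0) as the namespace, and a background spdk_nvme_perf run that the test will interrupt. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the following is a condensed sketch of the same sequence, assuming nvmf_tgt is already listening on /var/tmp/spdk.sock and with repository paths shortened for readability.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                   # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &          # 5 s random 70% read / 30% write load
  perf_pid=$!
  sleep 2                                                # let I/O get in flight before the delete that follows
The delay bdev's 1,000,000 us (1 s) artificial latency keeps requests queued, so the nvmf_delete_subsystem issued next lands while I/O is outstanding; those aborted requests are the repeated "completed with error (sct=0, sc=8)" completions in the perf output below.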
00:08:31.145 01:26:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.145 01:26:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.145 01:26:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.402 Write completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Write completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Write completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Write completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Write completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 starting I/O failed: -6 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.402 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 [2024-10-01 01:26:11.132089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fed0 is same with the state(6) to be set 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write 
completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, 
sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with 
error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Write completed with error (sct=0, sc=8) 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 Read completed with error (sct=0, sc=8) 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:31.403 starting I/O failed: -6 00:08:32.337 [2024-10-01 01:26:12.099383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204dd00 is same with the state(6) to be set 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 [2024-10-01 01:26:12.131854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20505c0 is same with the state(6) to be set 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 [2024-10-01 01:26:12.135780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f3000cfe0 is same with the state(6) to be set 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read 
completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 [2024-10-01 01:26:12.136049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f3000d640 is same with the state(6) to be set 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Write completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.337 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Write completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 Read completed with error (sct=0, sc=8) 00:08:32.338 [2024-10-01 01:26:12.136223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20500b0 is same with the state(6) to be set 00:08:32.338 Initializing NVMe Controllers 00:08:32.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.338 Controller IO queue size 128, less than required. 00:08:32.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:32.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:32.338 Initialization complete. Launching workers. 
00:08:32.338 ======================================================== 00:08:32.338 Latency(us) 00:08:32.338 Device Information : IOPS MiB/s Average min max 00:08:32.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.80 0.08 901328.36 561.42 1011227.85 00:08:32.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.24 0.09 957251.85 474.72 2004287.51 00:08:32.338 ======================================================== 00:08:32.338 Total : 341.04 0.17 929900.62 474.72 2004287.51 00:08:32.338 00:08:32.338 [2024-10-01 01:26:12.137400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204dd00 (9): Bad file descriptor 00:08:32.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:32.338 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.338 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:32.338 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 786114 00:08:32.338 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 786114 00:08:32.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (786114) - No such process 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 786114 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 786114 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 786114 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.903 01:26:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.903 [2024-10-01 01:26:12.660538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=786522 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:32.903 01:26:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.903 [2024-10-01 01:26:12.723380] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
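Note: the test now repeats the exercise with a 3-second perf run (pid 786522), and delete_subsystem.sh simply polls the perf process about every half second until it exits, which is what the string of "(( delay++ > 20 ))" / "kill -0 786522" / "sleep 0.5" records below shows. A minimal sketch of that wait loop, reconstructed from the traced line numbers rather than copied from the script, so the exact control flow is an assumption:
  delay=0
  while kill -0 "$perf_pid"; do      # kill -0 only probes; bash prints "No such process" once perf has exited
      if (( delay++ > 20 )); then    # roughly a 10 s cap on a 3 s perf run
          echo "perf $perf_pid did not exit in time"
          exit 1
      fi
      sleep 0.5
  done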
00:08:33.469 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.469 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:33.469 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.034 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.034 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:34.034 01:26:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.600 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.600 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:34.600 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.858 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:34.858 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:34.858 01:26:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.423 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:35.423 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:35.423 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.052 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.052 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:36.052 01:26:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.334 Initializing NVMe Controllers 00:08:36.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:36.334 Controller IO queue size 128, less than required. 00:08:36.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:36.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:36.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:36.334 Initialization complete. Launching workers. 
00:08:36.334 ======================================================== 00:08:36.334 Latency(us) 00:08:36.334 Device Information : IOPS MiB/s Average min max 00:08:36.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003396.63 1000247.97 1012005.80 00:08:36.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006208.62 1000257.57 1043554.94 00:08:36.334 ======================================================== 00:08:36.334 Total : 256.00 0.12 1004802.62 1000247.97 1043554.94 00:08:36.334 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 786522 00:08:36.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (786522) - No such process 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 786522 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.592 rmmod nvme_tcp 00:08:36.592 rmmod nvme_fabrics 00:08:36.592 rmmod nvme_keyring 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 785973 ']' 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 785973 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 785973 ']' 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 785973 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 785973 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 785973' 00:08:36.592 killing process with pid 785973 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 785973 00:08:36.592 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 785973 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.851 01:26:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.754 00:08:38.754 real 0m12.566s 00:08:38.754 user 0m28.142s 00:08:38.754 sys 0m3.053s 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.754 ************************************ 00:08:38.754 END TEST nvmf_delete_subsystem 00:08:38.754 ************************************ 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.754 01:26:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 ************************************ 00:08:39.012 START TEST nvmf_host_management 00:08:39.012 ************************************ 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:39.012 * Looking for test storage... 
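The nvmftestfini/nvmf_tcp_fini sequence traced above undoes the TCP setup used by the delete_subsystem test: the host-side NVMe modules are removed, the nvmf_tgt application is killed, the tagged firewall rule is stripped, and the namespace and leftover addresses are cleaned up. A hedged, stand-alone equivalent of those steps:

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 785973                    # the target pid recorded at startup (killprocess in the trace)
    # iptr: drop every rule tagged with the SPDK_NVMF comment, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # remove_spdk_ns; its exact implementation is not shown in the trace
    ip -4 addr flush cvl_0_1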
00:08:39.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.012 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.013 --rc genhtml_branch_coverage=1 00:08:39.013 --rc genhtml_function_coverage=1 00:08:39.013 --rc genhtml_legend=1 00:08:39.013 --rc geninfo_all_blocks=1 00:08:39.013 --rc geninfo_unexecuted_blocks=1 00:08:39.013 00:08:39.013 ' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.013 --rc genhtml_branch_coverage=1 00:08:39.013 --rc genhtml_function_coverage=1 00:08:39.013 --rc genhtml_legend=1 00:08:39.013 --rc geninfo_all_blocks=1 00:08:39.013 --rc geninfo_unexecuted_blocks=1 00:08:39.013 00:08:39.013 ' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.013 --rc genhtml_branch_coverage=1 00:08:39.013 --rc genhtml_function_coverage=1 00:08:39.013 --rc genhtml_legend=1 00:08:39.013 --rc geninfo_all_blocks=1 00:08:39.013 --rc geninfo_unexecuted_blocks=1 00:08:39.013 00:08:39.013 ' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.013 --rc genhtml_branch_coverage=1 00:08:39.013 --rc genhtml_function_coverage=1 00:08:39.013 --rc genhtml_legend=1 00:08:39.013 --rc geninfo_all_blocks=1 00:08:39.013 --rc geninfo_unexecuted_blocks=1 00:08:39.013 00:08:39.013 ' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.013 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:39.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.014 01:26:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.545 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.546 Found net devices under 0000:0a:00.1: 
cvl_0_1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:08:41.546 00:08:41.546 --- 10.0.0.2 ping statistics --- 00:08:41.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.546 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:08:41.546 00:08:41.546 --- 10.0.0.1 ping statistics --- 00:08:41.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.546 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=788884 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 788884 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 788884 ']' 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.546 01:26:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.546 [2024-10-01 01:26:21.045935] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:41.546 [2024-10-01 01:26:21.046064] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.546 [2024-10-01 01:26:21.116489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.546 [2024-10-01 01:26:21.210093] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.546 [2024-10-01 01:26:21.210141] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.546 [2024-10-01 01:26:21.210172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.546 [2024-10-01 01:26:21.210185] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.546 [2024-10-01 01:26:21.210195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
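The nvmftestinit/nvmfappstart bring-up traced above reduces to the following sequence; every address, interface name, and flag is taken from the trace, while the simplified socket wait stands in for the harness's waitforlisten helper.

    # Put the target-side E810 port in its own namespace and address both ends (nvmf_tcp_init above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open NVMe/TCP port 4420; the SPDK_NVMF comment is what the teardown's grep -v SPDK_NVMF keys on.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability both ways, then start the target in the namespace and wait for its RPC socket.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done               # waitforlisten does more checks; simplified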
00:08:41.546 [2024-10-01 01:26:21.210253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.546 [2024-10-01 01:26:21.210318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.546 [2024-10-01 01:26:21.210321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.547 [2024-10-01 01:26:21.210282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.547 [2024-10-01 01:26:21.371699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.547 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.805 Malloc0 00:08:41.805 [2024-10-01 01:26:21.432110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=789049 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 789049 /var/tmp/bdevperf.sock 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 789049 ']' 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:41.805 { 00:08:41.805 "params": { 00:08:41.805 "name": "Nvme$subsystem", 00:08:41.805 "trtype": "$TEST_TRANSPORT", 00:08:41.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.805 "adrfam": "ipv4", 00:08:41.805 "trsvcid": "$NVMF_PORT", 00:08:41.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.805 "hdgst": ${hdgst:-false}, 00:08:41.805 "ddgst": ${ddgst:-false} 00:08:41.805 }, 00:08:41.805 "method": "bdev_nvme_attach_controller" 00:08:41.805 } 00:08:41.805 EOF 00:08:41.805 )") 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:41.805 01:26:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:41.805 "params": { 00:08:41.805 "name": "Nvme0", 00:08:41.805 "trtype": "tcp", 00:08:41.805 "traddr": "10.0.0.2", 00:08:41.805 "adrfam": "ipv4", 00:08:41.805 "trsvcid": "4420", 00:08:41.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.805 "hdgst": false, 00:08:41.805 "ddgst": false 00:08:41.805 }, 00:08:41.805 "method": "bdev_nvme_attach_controller" 00:08:41.805 }' 00:08:41.805 [2024-10-01 01:26:21.513344] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
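gen_nvmf_target_json above expands its heredoc once for subsystem 0 and pipes the result through jq, and the printf output shows the attach-controller entry that bdevperf receives on fd 63 via --json /dev/fd/63. Written out as a file instead of a process substitution, and with the outer subsystems/bdev wrapper assumed to match SPDK's JSON config layout, the equivalent is roughly:

    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same bdevperf invocation as the trace, reading the config from a file rather than /dev/fd/63.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10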
00:08:41.805 [2024-10-01 01:26:21.513419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789049 ] 00:08:41.805 [2024-10-01 01:26:21.577310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.063 [2024-10-01 01:26:21.664736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.320 Running I/O for 10 seconds... 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:42.320 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:42.577 
01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.577 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.837 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.837 [2024-10-01 01:26:22.443432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is 
same with the state(6) to be set 00:08:42.837 [... the identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* "recv state of tqpair=0x19cf8b0 is same with the state(6) to be set" message repeats at timestamps 01:26:22.443622 through 01:26:22.443872 ...] 00:08:42.837 [2024-10-01 01:26:22.443883]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.443975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.837 [2024-10-01 01:26:22.444125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cf8b0 is same with the state(6) to be set 00:08:42.838 [2024-10-01 01:26:22.445669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.838 [2024-10-01 01:26:22.445710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.445728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.838 [2024-10-01 01:26:22.445742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.445756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.838 [2024-10-01 01:26:22.445770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.445784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:42.838 [2024-10-01 01:26:22.445797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.445809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2268090 is same with the state(6) to be set 00:08:42.838 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.838 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:42.838 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.838 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 [2024-10-01 01:26:22.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:42.838 [2024-10-01 01:26:22.451486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 
01:26:22.451784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.451974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.451993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.452015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.838 [2024-10-01 01:26:22.452032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.838 [2024-10-01 01:26:22.452046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452090] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.839 [2024-10-01 01:26:22.452975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.839 [2024-10-01 01:26:22.452992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:42.840 [2024-10-01 01:26:22.453239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.840 [2024-10-01 01:26:22.453334] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2480e10 was disconnected and freed. reset controller. 
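For reference, the abort/reset storm above corresponds to the host-management sequence being traced (target/host_management.sh, roughly lines 54-87): the test polls bdevperf's iostat until Nvme0n1 has served at least 100 reads, removes the host from the subsystem so every in-flight command is aborted and the controller disconnects, then re-adds the host so the subsequent reset can succeed. A rough standalone sketch of that sequence, using the RPC names, NQNs and bdevperf socket path visible in the trace (the rpc.py path and the polling loop itself are illustrative, not lifted from the script):

    rpc=./scripts/rpc.py
    # wait until the bdevperf job has issued at least 100 reads against Nvme0n1
    while true; do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 1
    done
    # removing the host aborts all outstanding I/O (the SQ DELETION notices above) and
    # disconnects the qpair; re-adding it lets the initiator's controller reset complete
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1
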
00:08:42.840 [2024-10-01 01:26:22.454478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:42.840 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.840 01:26:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:42.840 task offset: 77440 on job bdev=Nvme0n1 fails 00:08:42.840 00:08:42.840 Latency(us) 00:08:42.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.840 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.840 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:42.840 Verification LBA range: start 0x0 length 0x400 00:08:42.840 Nvme0n1 : 0.40 1495.10 93.44 158.16 0.00 37617.74 2451.53 34952.53 00:08:42.840 =================================================================================================================== 00:08:42.840 Total : 1495.10 93.44 158.16 0.00 37617.74 2451.53 34952.53 00:08:42.840 [2024-10-01 01:26:22.456543] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.840 [2024-10-01 01:26:22.456572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2268090 (9): Bad file descriptor 00:08:42.840 [2024-10-01 01:26:22.558466] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 789049 00:08:43.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (789049) - No such process 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:43.773 { 00:08:43.773 "params": { 00:08:43.773 "name": "Nvme$subsystem", 00:08:43.773 "trtype": "$TEST_TRANSPORT", 00:08:43.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.773 "adrfam": "ipv4", 00:08:43.773 "trsvcid": "$NVMF_PORT", 00:08:43.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.773 "hdgst": ${hdgst:-false}, 00:08:43.773 "ddgst": ${ddgst:-false} 00:08:43.773 }, 00:08:43.773 "method": "bdev_nvme_attach_controller" 00:08:43.773 } 00:08:43.773 EOF 00:08:43.773 )") 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:43.773 01:26:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:43.773 01:26:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:43.773 "params": { 00:08:43.773 "name": "Nvme0", 00:08:43.773 "trtype": "tcp", 00:08:43.774 "traddr": "10.0.0.2", 00:08:43.774 "adrfam": "ipv4", 00:08:43.774 "trsvcid": "4420", 00:08:43.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:43.774 "hdgst": false, 00:08:43.774 "ddgst": false 00:08:43.774 }, 00:08:43.774 "method": "bdev_nvme_attach_controller" 00:08:43.774 }' 00:08:43.774 [2024-10-01 01:26:23.507519] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:43.774 [2024-10-01 01:26:23.507591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789214 ] 00:08:43.774 [2024-10-01 01:26:23.569047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.031 [2024-10-01 01:26:23.655948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.031 Running I/O for 1 seconds... 00:08:45.402 1555.00 IOPS, 97.19 MiB/s 00:08:45.402 Latency(us) 00:08:45.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.402 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:45.402 Verification LBA range: start 0x0 length 0x400 00:08:45.402 Nvme0n1 : 1.01 1600.90 100.06 0.00 0.00 39171.10 2427.26 33787.45 00:08:45.402 =================================================================================================================== 00:08:45.402 Total : 1600.90 100.06 0.00 0.00 39171.10 2427.26 33787.45 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.402 rmmod nvme_tcp 00:08:45.402 rmmod nvme_fabrics 00:08:45.402 rmmod nvme_keyring 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 788884 ']' 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 788884 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 788884 ']' 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 788884 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788884 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788884' 00:08:45.402 killing process with pid 788884 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 788884 00:08:45.402 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 788884 00:08:45.660 [2024-10-01 01:26:25.438640] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.660 01:26:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.194 01:26:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:48.194 00:08:48.194 real 0m8.899s 00:08:48.194 user 0m20.195s 00:08:48.194 sys 0m2.726s 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.194 ************************************ 00:08:48.194 END TEST nvmf_host_management 00:08:48.194 ************************************ 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.194 ************************************ 00:08:48.194 START TEST nvmf_lvol 00:08:48.194 ************************************ 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:48.194 * Looking for test storage... 00:08:48.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.194 --rc genhtml_branch_coverage=1 00:08:48.194 --rc genhtml_function_coverage=1 00:08:48.194 --rc genhtml_legend=1 00:08:48.194 --rc geninfo_all_blocks=1 00:08:48.194 --rc geninfo_unexecuted_blocks=1 00:08:48.194 00:08:48.194 ' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.194 --rc genhtml_branch_coverage=1 00:08:48.194 --rc genhtml_function_coverage=1 00:08:48.194 --rc genhtml_legend=1 00:08:48.194 --rc geninfo_all_blocks=1 00:08:48.194 --rc geninfo_unexecuted_blocks=1 00:08:48.194 00:08:48.194 ' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.194 --rc genhtml_branch_coverage=1 00:08:48.194 --rc genhtml_function_coverage=1 00:08:48.194 --rc genhtml_legend=1 00:08:48.194 --rc geninfo_all_blocks=1 00:08:48.194 --rc geninfo_unexecuted_blocks=1 00:08:48.194 00:08:48.194 ' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.194 --rc genhtml_branch_coverage=1 00:08:48.194 --rc genhtml_function_coverage=1 00:08:48.194 --rc genhtml_legend=1 00:08:48.194 --rc geninfo_all_blocks=1 00:08:48.194 --rc geninfo_unexecuted_blocks=1 00:08:48.194 00:08:48.194 ' 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
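Condensed, the lcov probe traced above (scripts/common.sh: lt → cmp_versions) is a component-wise numeric comparison of the two version strings, split on '.', '-' and ':'. The following is a simplified sketch of the '<' path only, not the repository's full cmp_versions helper (which also handles other operators); the function name here is illustrative:

    # split both versions on '.', '-' or ':' and compare the fields numerically, left to right
    version_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"
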
00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.194 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.195 01:26:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.096 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:50.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:50.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:50.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:50.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.097 
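Condensed, the device-discovery pass traced above amounts to the loop below. It is a sketch reconstructed from the xtrace output, not the verbatim common.sh source; the e810 array and the cvl_0_* interface names are specific to this rig.

  # only the Intel E810 ports (0x8086:0x159b) matched on this host
  pci_devs=("${e810[@]}")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # each PCI function exposes its bound netdev under sysfs
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name, e.g. cvl_0_0
      net_devs+=("${pci_net_devs[@]}")
  done
  # result in this run: net_devs=(cvl_0_0 cvl_0_1)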
01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:08:50.097 00:08:50.097 --- 10.0.0.2 ping statistics --- 00:08:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.097 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:50.097 00:08:50.097 --- 10.0.0.1 ping statistics --- 00:08:50.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.097 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=791419 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 791419 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 791419 ']' 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.097 01:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.356 [2024-10-01 01:26:29.952980] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
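Before the target application is launched, the nvmf_tcp_init sequence above has already wired up the test network. Stripped of the xtrace prefixes it is roughly the following; the commands, interface names and addresses are copied from the trace, only the comments are added.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tag lets teardown strip the rule again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check both directions before starting nvmf_tgt
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1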
00:08:50.356 [2024-10-01 01:26:29.953115] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.356 [2024-10-01 01:26:30.023517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.356 [2024-10-01 01:26:30.119150] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.356 [2024-10-01 01:26:30.119215] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.356 [2024-10-01 01:26:30.119243] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.356 [2024-10-01 01:26:30.119255] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.356 [2024-10-01 01:26:30.119265] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.356 [2024-10-01 01:26:30.119357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.356 [2024-10-01 01:26:30.119412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.356 [2024-10-01 01:26:30.119415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.613 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.613 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:50.613 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:50.613 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.613 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.614 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.614 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.870 [2024-10-01 01:26:30.535386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.870 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.126 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:51.126 01:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.383 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:51.383 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:51.640 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:51.897 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=64bc8b8d-d637-4b65-b095-707b7ca79c1e 00:08:51.897 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 64bc8b8d-d637-4b65-b095-707b7ca79c1e lvol 20 00:08:52.154 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7b4c5bb9-d0cf-40ea-9dc6-5cf26c6f0e69 00:08:52.154 01:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.411 01:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b4c5bb9-d0cf-40ea-9dc6-5cf26c6f0e69 00:08:52.667 01:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.232 [2024-10-01 01:26:32.784557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.232 01:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.232 01:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=791850 00:08:53.232 01:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:53.232 01:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:54.604 01:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7b4c5bb9-d0cf-40ea-9dc6-5cf26c6f0e69 MY_SNAPSHOT 00:08:54.604 01:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a505a5a2-ee36-483d-811c-3d03e95edc58 00:08:54.604 01:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7b4c5bb9-d0cf-40ea-9dc6-5cf26c6f0e69 30 00:08:55.169 01:26:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a505a5a2-ee36-483d-811c-3d03e95edc58 MY_CLONE 00:08:55.427 01:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1193b556-de44-4f86-b63c-6d7a555cd0cc 00:08:55.427 01:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1193b556-de44-4f86-b63c-6d7a555cd0cc 00:08:55.993 01:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 791850 00:09:04.106 Initializing NVMe Controllers 00:09:04.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:04.106 Controller IO queue size 128, less than required. 00:09:04.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
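The nvmf_lvol body traced above reduces to the RPC sequence below. $rpc is shorthand for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path, the UUID capture mirrors the lvs=/lvol=/snapshot=/clone= assignments in the trace, and the numeric sizes (64, 20, 30) are the MiB arguments shown there.

  rpc=scripts/rpc.py                                   # shorthand for the full workspace path
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf drives 10 s of random writes against the namespace:
  snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait "$perf_pid"                                     # let the perf run finish and report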
00:09:04.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:04.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:04.106 Initialization complete. Launching workers. 00:09:04.106 ======================================================== 00:09:04.106 Latency(us) 00:09:04.106 Device Information : IOPS MiB/s Average min max 00:09:04.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10467.97 40.89 12231.20 1147.74 90309.60 00:09:04.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10415.17 40.68 12294.65 2165.52 73766.94 00:09:04.106 ======================================================== 00:09:04.106 Total : 20883.14 81.57 12262.85 1147.74 90309.60 00:09:04.106 00:09:04.106 01:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.106 01:26:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b4c5bb9-d0cf-40ea-9dc6-5cf26c6f0e69 00:09:04.364 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64bc8b8d-d637-4b65-b095-707b7ca79c1e 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.622 rmmod nvme_tcp 00:09:04.622 rmmod nvme_fabrics 00:09:04.622 rmmod nvme_keyring 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 791419 ']' 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 791419 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 791419 ']' 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 791419 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 791419 00:09:04.622 01:26:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 791419' 00:09:04.622 killing process with pid 791419 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 791419 00:09:04.622 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 791419 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.881 01:26:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.412 00:09:07.412 real 0m19.166s 00:09:07.412 user 1m5.778s 00:09:07.412 sys 0m5.355s 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:07.412 ************************************ 00:09:07.412 END TEST nvmf_lvol 00:09:07.412 ************************************ 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.412 ************************************ 00:09:07.412 START TEST nvmf_lvs_grow 00:09:07.412 ************************************ 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:07.412 * Looking for test storage... 
00:09:07.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:07.412 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:07.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.413 --rc genhtml_branch_coverage=1 00:09:07.413 --rc genhtml_function_coverage=1 00:09:07.413 --rc genhtml_legend=1 00:09:07.413 --rc geninfo_all_blocks=1 00:09:07.413 --rc geninfo_unexecuted_blocks=1 00:09:07.413 00:09:07.413 ' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:07.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.413 --rc genhtml_branch_coverage=1 00:09:07.413 --rc genhtml_function_coverage=1 00:09:07.413 --rc genhtml_legend=1 00:09:07.413 --rc geninfo_all_blocks=1 00:09:07.413 --rc geninfo_unexecuted_blocks=1 00:09:07.413 00:09:07.413 ' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:07.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.413 --rc genhtml_branch_coverage=1 00:09:07.413 --rc genhtml_function_coverage=1 00:09:07.413 --rc genhtml_legend=1 00:09:07.413 --rc geninfo_all_blocks=1 00:09:07.413 --rc geninfo_unexecuted_blocks=1 00:09:07.413 00:09:07.413 ' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:07.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.413 --rc genhtml_branch_coverage=1 00:09:07.413 --rc genhtml_function_coverage=1 00:09:07.413 --rc genhtml_legend=1 00:09:07.413 --rc geninfo_all_blocks=1 00:09:07.413 --rc geninfo_unexecuted_blocks=1 00:09:07.413 00:09:07.413 ' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:07.413 01:26:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:07.413 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.414 01:26:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:09.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:09.319 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:09.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:09.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:09.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.320 
01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.320 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.321 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.321 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.321 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.321 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.321 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:09:09.580 00:09:09.580 --- 10.0.0.2 ping statistics --- 00:09:09.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.580 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:09:09.580 00:09:09.580 --- 10.0.0.1 ping statistics --- 00:09:09.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.580 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.580 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=795133 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 795133 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 795133 ']' 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.581 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.581 [2024-10-01 01:26:49.254262] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
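The target launch that follows is the nvmfappstart step. Condensed from the trace it is approximately the lines below; backgrounding with & and $! is implied by the pid handling in common.sh rather than shown verbatim in the xtrace.

  # NVMF_APP is prefixed with the namespace wrapper before it is started
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"      # waits until /var/tmp/spdk.sock accepts RPCs
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192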
00:09:09.581 [2024-10-01 01:26:49.254370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.581 [2024-10-01 01:26:49.320526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.581 [2024-10-01 01:26:49.409501] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.581 [2024-10-01 01:26:49.409559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.581 [2024-10-01 01:26:49.409587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.581 [2024-10-01 01:26:49.409598] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.581 [2024-10-01 01:26:49.409608] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.581 [2024-10-01 01:26:49.409635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.879 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.172 [2024-10-01 01:26:49.784440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.172 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:10.172 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.173 ************************************ 00:09:10.173 START TEST lvs_grow_clean 00:09:10.173 ************************************ 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:10.173 01:26:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.173 01:26:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.430 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:10.430 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.688 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:10.688 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:10.688 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.947 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.947 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.947 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f lvol 150 00:09:11.204 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=013b5471-32c8-42f2-bb6c-3b332bfeb20c 00:09:11.204 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:11.204 01:26:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:11.462 [2024-10-01 01:26:51.236563] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:11.462 [2024-10-01 01:26:51.236660] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:11.462 true 00:09:11.462 01:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:11.462 01:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:11.720 01:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:11.720 01:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.978 01:26:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 013b5471-32c8-42f2-bb6c-3b332bfeb20c 00:09:12.235 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:12.493 [2024-10-01 01:26:52.319916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.493 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=795573 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 795573 /var/tmp/bdevperf.sock 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 795573 ']' 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.751 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:13.009 [2024-10-01 01:26:52.644887] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
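For reference, the lvs_grow_clean setup traced above reduces to roughly the following RPC sequence. This is a minimal sketch: the backing-file path /tmp/aio_bdev_file and the shell variables are illustrative, paths are abbreviated relative to the SPDK tree, and the RPC names, sizes, and cluster counts are the ones exercised in this log.

    # back the lvstore with a 200 MiB file-based AIO bdev (4 KiB block size)
    truncate -s 200M /tmp/aio_bdev_file
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # carve a 150 MiB lvol out of the store
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)

    # grow the backing file and let the AIO bdev pick up the new size;
    # the lvstore itself still reports 49 data clusters until it is grown
    truncate -s 400M /tmp/aio_bdev_file
    scripts/rpc.py bdev_aio_rescan aio_bdev
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

The actual growth of the store (bdev_lvol_grow_lvstore) is deliberately deferred until bdevperf I/O is running against the exported lvol, which is what the rest of this test exercises.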
00:09:13.009 [2024-10-01 01:26:52.644972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795573 ] 00:09:13.009 [2024-10-01 01:26:52.708508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.009 [2024-10-01 01:26:52.800212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.266 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.266 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:13.266 01:26:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:13.524 Nvme0n1 00:09:13.524 01:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:13.781 [ 00:09:13.781 { 00:09:13.781 "name": "Nvme0n1", 00:09:13.781 "aliases": [ 00:09:13.781 "013b5471-32c8-42f2-bb6c-3b332bfeb20c" 00:09:13.781 ], 00:09:13.781 "product_name": "NVMe disk", 00:09:13.781 "block_size": 4096, 00:09:13.781 "num_blocks": 38912, 00:09:13.781 "uuid": "013b5471-32c8-42f2-bb6c-3b332bfeb20c", 00:09:13.781 "numa_id": 0, 00:09:13.781 "assigned_rate_limits": { 00:09:13.781 "rw_ios_per_sec": 0, 00:09:13.781 "rw_mbytes_per_sec": 0, 00:09:13.781 "r_mbytes_per_sec": 0, 00:09:13.781 "w_mbytes_per_sec": 0 00:09:13.781 }, 00:09:13.781 "claimed": false, 00:09:13.781 "zoned": false, 00:09:13.781 "supported_io_types": { 00:09:13.781 "read": true, 00:09:13.781 "write": true, 00:09:13.781 "unmap": true, 00:09:13.781 "flush": true, 00:09:13.781 "reset": true, 00:09:13.781 "nvme_admin": true, 00:09:13.781 "nvme_io": true, 00:09:13.781 "nvme_io_md": false, 00:09:13.781 "write_zeroes": true, 00:09:13.781 "zcopy": false, 00:09:13.781 "get_zone_info": false, 00:09:13.781 "zone_management": false, 00:09:13.781 "zone_append": false, 00:09:13.781 "compare": true, 00:09:13.781 "compare_and_write": true, 00:09:13.781 "abort": true, 00:09:13.781 "seek_hole": false, 00:09:13.781 "seek_data": false, 00:09:13.781 "copy": true, 00:09:13.781 "nvme_iov_md": false 00:09:13.781 }, 00:09:13.781 "memory_domains": [ 00:09:13.781 { 00:09:13.781 "dma_device_id": "system", 00:09:13.781 "dma_device_type": 1 00:09:13.781 } 00:09:13.781 ], 00:09:13.781 "driver_specific": { 00:09:13.781 "nvme": [ 00:09:13.781 { 00:09:13.781 "trid": { 00:09:13.781 "trtype": "TCP", 00:09:13.781 "adrfam": "IPv4", 00:09:13.781 "traddr": "10.0.0.2", 00:09:13.781 "trsvcid": "4420", 00:09:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:13.781 }, 00:09:13.781 "ctrlr_data": { 00:09:13.781 "cntlid": 1, 00:09:13.781 "vendor_id": "0x8086", 00:09:13.781 "model_number": "SPDK bdev Controller", 00:09:13.781 "serial_number": "SPDK0", 00:09:13.781 "firmware_revision": "25.01", 00:09:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.781 "oacs": { 00:09:13.781 "security": 0, 00:09:13.781 "format": 0, 00:09:13.781 "firmware": 0, 00:09:13.781 "ns_manage": 0 00:09:13.781 }, 00:09:13.781 "multi_ctrlr": true, 00:09:13.781 
"ana_reporting": false 00:09:13.781 }, 00:09:13.781 "vs": { 00:09:13.781 "nvme_version": "1.3" 00:09:13.781 }, 00:09:13.781 "ns_data": { 00:09:13.781 "id": 1, 00:09:13.781 "can_share": true 00:09:13.781 } 00:09:13.781 } 00:09:13.781 ], 00:09:13.781 "mp_policy": "active_passive" 00:09:13.781 } 00:09:13.781 } 00:09:13.781 ] 00:09:13.781 01:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=795711 00:09:13.781 01:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.781 01:26:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.039 Running I/O for 10 seconds... 00:09:14.981 Latency(us) 00:09:14.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.981 Nvme0n1 : 1.00 14318.00 55.93 0.00 0.00 0.00 0.00 0.00 00:09:14.981 =================================================================================================================== 00:09:14.981 Total : 14318.00 55.93 0.00 0.00 0.00 0.00 0.00 00:09:14.981 00:09:15.913 01:26:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:15.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.913 Nvme0n1 : 2.00 14178.50 55.38 0.00 0.00 0.00 0.00 0.00 00:09:15.913 =================================================================================================================== 00:09:15.913 Total : 14178.50 55.38 0.00 0.00 0.00 0.00 0.00 00:09:15.913 00:09:16.170 true 00:09:16.170 01:26:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:16.170 01:26:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:16.426 01:26:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:16.426 01:26:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:16.426 01:26:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 795711 00:09:16.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.991 Nvme0n1 : 3.00 13985.67 54.63 0.00 0.00 0.00 0.00 0.00 00:09:16.991 =================================================================================================================== 00:09:16.991 Total : 13985.67 54.63 0.00 0.00 0.00 0.00 0.00 00:09:16.991 00:09:17.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.921 Nvme0n1 : 4.00 13879.25 54.22 0.00 0.00 0.00 0.00 0.00 00:09:17.921 =================================================================================================================== 00:09:17.921 Total : 13879.25 54.22 0.00 0.00 0.00 0.00 0.00 00:09:17.921 00:09:18.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.853 Nvme0n1 
: 5.00 13836.20 54.05 0.00 0.00 0.00 0.00 0.00 00:09:18.853 =================================================================================================================== 00:09:18.853 Total : 13836.20 54.05 0.00 0.00 0.00 0.00 0.00 00:09:18.853 00:09:20.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.224 Nvme0n1 : 6.00 13822.17 53.99 0.00 0.00 0.00 0.00 0.00 00:09:20.224 =================================================================================================================== 00:09:20.224 Total : 13822.17 53.99 0.00 0.00 0.00 0.00 0.00 00:09:20.224 00:09:21.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.158 Nvme0n1 : 7.00 13813.29 53.96 0.00 0.00 0.00 0.00 0.00 00:09:21.158 =================================================================================================================== 00:09:21.158 Total : 13813.29 53.96 0.00 0.00 0.00 0.00 0.00 00:09:21.158 00:09:22.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.089 Nvme0n1 : 8.00 13804.62 53.92 0.00 0.00 0.00 0.00 0.00 00:09:22.089 =================================================================================================================== 00:09:22.089 Total : 13804.62 53.92 0.00 0.00 0.00 0.00 0.00 00:09:22.089 00:09:23.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.019 Nvme0n1 : 9.00 13810.33 53.95 0.00 0.00 0.00 0.00 0.00 00:09:23.019 =================================================================================================================== 00:09:23.019 Total : 13810.33 53.95 0.00 0.00 0.00 0.00 0.00 00:09:23.019 00:09:23.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.950 Nvme0n1 : 10.00 13821.30 53.99 0.00 0.00 0.00 0.00 0.00 00:09:23.950 =================================================================================================================== 00:09:23.950 Total : 13821.30 53.99 0.00 0.00 0.00 0.00 0.00 00:09:23.950 00:09:23.950 00:09:23.950 Latency(us) 00:09:23.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.951 Nvme0n1 : 10.01 13821.06 53.99 0.00 0.00 9252.87 2779.21 16214.09 00:09:23.951 =================================================================================================================== 00:09:23.951 Total : 13821.06 53.99 0.00 0.00 9252.87 2779.21 16214.09 00:09:23.951 { 00:09:23.951 "results": [ 00:09:23.951 { 00:09:23.951 "job": "Nvme0n1", 00:09:23.951 "core_mask": "0x2", 00:09:23.951 "workload": "randwrite", 00:09:23.951 "status": "finished", 00:09:23.951 "queue_depth": 128, 00:09:23.951 "io_size": 4096, 00:09:23.951 "runtime": 10.008858, 00:09:23.951 "iops": 13821.057307437071, 00:09:23.951 "mibps": 53.98850510717606, 00:09:23.951 "io_failed": 0, 00:09:23.951 "io_timeout": 0, 00:09:23.951 "avg_latency_us": 9252.873794234043, 00:09:23.951 "min_latency_us": 2779.211851851852, 00:09:23.951 "max_latency_us": 16214.091851851852 00:09:23.951 } 00:09:23.951 ], 00:09:23.951 "core_count": 1 00:09:23.951 } 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 795573 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 795573 ']' 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # kill -0 795573 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795573 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795573' 00:09:23.951 killing process with pid 795573 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 795573 00:09:23.951 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.951 00:09:23.951 Latency(us) 00:09:23.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.951 =================================================================================================================== 00:09:23.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.951 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 795573 00:09:24.208 01:27:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.465 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.722 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:24.722 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.979 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:24.979 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:24.979 01:27:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.237 [2024-10-01 01:27:05.086903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:25.495 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:25.753 request: 00:09:25.753 { 00:09:25.753 "uuid": "e0aa7901-9213-4bec-80ea-8d9a7c519d2f", 00:09:25.753 "method": "bdev_lvol_get_lvstores", 00:09:25.753 "req_id": 1 00:09:25.753 } 00:09:25.753 Got JSON-RPC error response 00:09:25.753 response: 00:09:25.753 { 00:09:25.753 "code": -19, 00:09:25.753 "message": "No such device" 00:09:25.753 } 00:09:25.753 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:25.753 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.753 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.753 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.753 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.011 aio_bdev 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 013b5471-32c8-42f2-bb6c-3b332bfeb20c 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=013b5471-32c8-42f2-bb6c-3b332bfeb20c 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:26.011 01:27:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.011 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.268 01:27:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 013b5471-32c8-42f2-bb6c-3b332bfeb20c -t 2000 00:09:26.526 [ 00:09:26.526 { 00:09:26.526 "name": "013b5471-32c8-42f2-bb6c-3b332bfeb20c", 00:09:26.526 "aliases": [ 00:09:26.526 "lvs/lvol" 00:09:26.526 ], 00:09:26.526 "product_name": "Logical Volume", 00:09:26.526 "block_size": 4096, 00:09:26.526 "num_blocks": 38912, 00:09:26.526 "uuid": "013b5471-32c8-42f2-bb6c-3b332bfeb20c", 00:09:26.526 "assigned_rate_limits": { 00:09:26.526 "rw_ios_per_sec": 0, 00:09:26.526 "rw_mbytes_per_sec": 0, 00:09:26.526 "r_mbytes_per_sec": 0, 00:09:26.526 "w_mbytes_per_sec": 0 00:09:26.526 }, 00:09:26.526 "claimed": false, 00:09:26.526 "zoned": false, 00:09:26.526 "supported_io_types": { 00:09:26.526 "read": true, 00:09:26.526 "write": true, 00:09:26.526 "unmap": true, 00:09:26.526 "flush": false, 00:09:26.526 "reset": true, 00:09:26.526 "nvme_admin": false, 00:09:26.526 "nvme_io": false, 00:09:26.526 "nvme_io_md": false, 00:09:26.526 "write_zeroes": true, 00:09:26.526 "zcopy": false, 00:09:26.526 "get_zone_info": false, 00:09:26.526 "zone_management": false, 00:09:26.526 "zone_append": false, 00:09:26.526 "compare": false, 00:09:26.526 "compare_and_write": false, 00:09:26.526 "abort": false, 00:09:26.526 "seek_hole": true, 00:09:26.526 "seek_data": true, 00:09:26.526 "copy": false, 00:09:26.526 "nvme_iov_md": false 00:09:26.526 }, 00:09:26.526 "driver_specific": { 00:09:26.526 "lvol": { 00:09:26.526 "lvol_store_uuid": "e0aa7901-9213-4bec-80ea-8d9a7c519d2f", 00:09:26.526 "base_bdev": "aio_bdev", 00:09:26.526 "thin_provision": false, 00:09:26.526 "num_allocated_clusters": 38, 00:09:26.526 "snapshot": false, 00:09:26.526 "clone": false, 00:09:26.526 "esnap_clone": false 00:09:26.526 } 00:09:26.526 } 00:09:26.526 } 00:09:26.526 ] 00:09:26.526 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:26.526 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:26.526 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:26.784 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:26.784 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:26.784 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:27.042 01:27:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:27.042 01:27:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 013b5471-32c8-42f2-bb6c-3b332bfeb20c 00:09:27.299 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0aa7901-9213-4bec-80ea-8d9a7c519d2f 00:09:27.557 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.815 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.073 00:09:28.073 real 0m17.847s 00:09:28.073 user 0m14.958s 00:09:28.073 sys 0m2.968s 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 ************************************ 00:09:28.073 END TEST lvs_grow_clean 00:09:28.073 ************************************ 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 ************************************ 00:09:28.073 START TEST lvs_grow_dirty 00:09:28.073 ************************************ 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.073 01:27:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.331 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:28.331 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:28.588 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:28.588 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:28.588 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:28.846 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:28.846 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:28.846 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 lvol 150 00:09:29.104 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fbfd849d-411a-41d2-928c-1a656078fb47 00:09:29.104 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.105 01:27:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.362 [2024-10-01 01:27:09.163563] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.362 [2024-10-01 01:27:09.163658] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.362 true 00:09:29.362 01:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:29.362 01:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:29.618 01:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:29.618 01:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:29.876 01:27:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fbfd849d-411a-41d2-928c-1a656078fb47 00:09:30.134 01:27:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.393 [2024-10-01 01:27:10.230854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.651 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=798386 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 798386 /var/tmp/bdevperf.sock 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 798386 ']' 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.909 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.909 [2024-10-01 01:27:10.611047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
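Both variants hand the lvol to bdevperf the same way: the target exports it as an NVMe/TCP namespace, and bdevperf (started with -z) attaches to it over its own RPC socket before perform_tests kicks off the 10-second randwrite run. Condensed from the trace, with addresses, NQN, and bdevperf flags as logged and paths abbreviated; the shell variables are illustrative:

    # target side: export the lvol over NVMe/TCP
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # created once, earlier in the run
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf waits in -z mode; the harness waits for its RPC
    # socket, then feeds it a bdev and starts the workload
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # ~2 s into the run, the store is grown on the target side
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"                 # total_data_clusters: 49 -> 99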
00:09:30.909 [2024-10-01 01:27:10.611139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798386 ] 00:09:30.909 [2024-10-01 01:27:10.673218] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.909 [2024-10-01 01:27:10.760461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.166 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.166 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:31.166 01:27:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.424 Nvme0n1 00:09:31.424 01:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:31.681 [ 00:09:31.681 { 00:09:31.681 "name": "Nvme0n1", 00:09:31.681 "aliases": [ 00:09:31.681 "fbfd849d-411a-41d2-928c-1a656078fb47" 00:09:31.681 ], 00:09:31.681 "product_name": "NVMe disk", 00:09:31.681 "block_size": 4096, 00:09:31.681 "num_blocks": 38912, 00:09:31.681 "uuid": "fbfd849d-411a-41d2-928c-1a656078fb47", 00:09:31.681 "numa_id": 0, 00:09:31.681 "assigned_rate_limits": { 00:09:31.681 "rw_ios_per_sec": 0, 00:09:31.681 "rw_mbytes_per_sec": 0, 00:09:31.681 "r_mbytes_per_sec": 0, 00:09:31.681 "w_mbytes_per_sec": 0 00:09:31.681 }, 00:09:31.681 "claimed": false, 00:09:31.681 "zoned": false, 00:09:31.681 "supported_io_types": { 00:09:31.681 "read": true, 00:09:31.681 "write": true, 00:09:31.681 "unmap": true, 00:09:31.681 "flush": true, 00:09:31.681 "reset": true, 00:09:31.681 "nvme_admin": true, 00:09:31.681 "nvme_io": true, 00:09:31.681 "nvme_io_md": false, 00:09:31.681 "write_zeroes": true, 00:09:31.681 "zcopy": false, 00:09:31.681 "get_zone_info": false, 00:09:31.681 "zone_management": false, 00:09:31.681 "zone_append": false, 00:09:31.681 "compare": true, 00:09:31.681 "compare_and_write": true, 00:09:31.681 "abort": true, 00:09:31.681 "seek_hole": false, 00:09:31.681 "seek_data": false, 00:09:31.681 "copy": true, 00:09:31.681 "nvme_iov_md": false 00:09:31.681 }, 00:09:31.681 "memory_domains": [ 00:09:31.681 { 00:09:31.681 "dma_device_id": "system", 00:09:31.681 "dma_device_type": 1 00:09:31.681 } 00:09:31.681 ], 00:09:31.681 "driver_specific": { 00:09:31.681 "nvme": [ 00:09:31.681 { 00:09:31.681 "trid": { 00:09:31.681 "trtype": "TCP", 00:09:31.681 "adrfam": "IPv4", 00:09:31.681 "traddr": "10.0.0.2", 00:09:31.681 "trsvcid": "4420", 00:09:31.681 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:31.681 }, 00:09:31.681 "ctrlr_data": { 00:09:31.681 "cntlid": 1, 00:09:31.681 "vendor_id": "0x8086", 00:09:31.681 "model_number": "SPDK bdev Controller", 00:09:31.681 "serial_number": "SPDK0", 00:09:31.681 "firmware_revision": "25.01", 00:09:31.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.681 "oacs": { 00:09:31.681 "security": 0, 00:09:31.681 "format": 0, 00:09:31.681 "firmware": 0, 00:09:31.681 "ns_manage": 0 00:09:31.681 }, 00:09:31.681 "multi_ctrlr": true, 00:09:31.681 
"ana_reporting": false 00:09:31.681 }, 00:09:31.681 "vs": { 00:09:31.681 "nvme_version": "1.3" 00:09:31.681 }, 00:09:31.681 "ns_data": { 00:09:31.681 "id": 1, 00:09:31.681 "can_share": true 00:09:31.681 } 00:09:31.681 } 00:09:31.681 ], 00:09:31.681 "mp_policy": "active_passive" 00:09:31.681 } 00:09:31.681 } 00:09:31.681 ] 00:09:31.681 01:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=798520 00:09:31.681 01:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:31.681 01:27:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.939 Running I/O for 10 seconds... 00:09:32.871 Latency(us) 00:09:32.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.871 Nvme0n1 : 1.00 14420.00 56.33 0.00 0.00 0.00 0.00 0.00 00:09:32.871 =================================================================================================================== 00:09:32.871 Total : 14420.00 56.33 0.00 0.00 0.00 0.00 0.00 00:09:32.871 00:09:33.804 01:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:33.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.804 Nvme0n1 : 2.00 14515.50 56.70 0.00 0.00 0.00 0.00 0.00 00:09:33.804 =================================================================================================================== 00:09:33.804 Total : 14515.50 56.70 0.00 0.00 0.00 0.00 0.00 00:09:33.804 00:09:34.063 true 00:09:34.063 01:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:34.063 01:27:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:34.345 01:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.345 01:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.345 01:27:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 798520 00:09:34.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.914 Nvme0n1 : 3.00 14588.67 56.99 0.00 0.00 0.00 0.00 0.00 00:09:34.914 =================================================================================================================== 00:09:34.914 Total : 14588.67 56.99 0.00 0.00 0.00 0.00 0.00 00:09:34.914 00:09:35.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.847 Nvme0n1 : 4.00 14719.75 57.50 0.00 0.00 0.00 0.00 0.00 00:09:35.847 =================================================================================================================== 00:09:35.847 Total : 14719.75 57.50 0.00 0.00 0.00 0.00 0.00 00:09:35.847 00:09:37.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.219 Nvme0n1 
: 5.00 14762.40 57.67 0.00 0.00 0.00 0.00 0.00 00:09:37.219 =================================================================================================================== 00:09:37.219 Total : 14762.40 57.67 0.00 0.00 0.00 0.00 0.00 00:09:37.219 00:09:38.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.150 Nvme0n1 : 6.00 14799.67 57.81 0.00 0.00 0.00 0.00 0.00 00:09:38.150 =================================================================================================================== 00:09:38.150 Total : 14799.67 57.81 0.00 0.00 0.00 0.00 0.00 00:09:38.150 00:09:39.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.082 Nvme0n1 : 7.00 14845.29 57.99 0.00 0.00 0.00 0.00 0.00 00:09:39.082 =================================================================================================================== 00:09:39.082 Total : 14845.29 57.99 0.00 0.00 0.00 0.00 0.00 00:09:39.082 00:09:40.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.016 Nvme0n1 : 8.00 14878.75 58.12 0.00 0.00 0.00 0.00 0.00 00:09:40.016 =================================================================================================================== 00:09:40.016 Total : 14878.75 58.12 0.00 0.00 0.00 0.00 0.00 00:09:40.016 00:09:40.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.948 Nvme0n1 : 9.00 14890.89 58.17 0.00 0.00 0.00 0.00 0.00 00:09:40.949 =================================================================================================================== 00:09:40.949 Total : 14890.89 58.17 0.00 0.00 0.00 0.00 0.00 00:09:40.949 00:09:41.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.882 Nvme0n1 : 10.00 14907.10 58.23 0.00 0.00 0.00 0.00 0.00 00:09:41.882 =================================================================================================================== 00:09:41.882 Total : 14907.10 58.23 0.00 0.00 0.00 0.00 0.00 00:09:41.882 00:09:41.882 00:09:41.882 Latency(us) 00:09:41.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.882 Nvme0n1 : 10.01 14911.69 58.25 0.00 0.00 8579.05 5145.79 17282.09 00:09:41.882 =================================================================================================================== 00:09:41.882 Total : 14911.69 58.25 0.00 0.00 8579.05 5145.79 17282.09 00:09:41.882 { 00:09:41.882 "results": [ 00:09:41.882 { 00:09:41.882 "job": "Nvme0n1", 00:09:41.882 "core_mask": "0x2", 00:09:41.882 "workload": "randwrite", 00:09:41.882 "status": "finished", 00:09:41.882 "queue_depth": 128, 00:09:41.882 "io_size": 4096, 00:09:41.882 "runtime": 10.005507, 00:09:41.882 "iops": 14911.688133344967, 00:09:41.882 "mibps": 58.24878177087878, 00:09:41.882 "io_failed": 0, 00:09:41.882 "io_timeout": 0, 00:09:41.882 "avg_latency_us": 8579.049337327997, 00:09:41.882 "min_latency_us": 5145.789629629629, 00:09:41.882 "max_latency_us": 17282.085925925927 00:09:41.882 } 00:09:41.882 ], 00:09:41.882 "core_count": 1 00:09:41.882 } 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 798386 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 798386 ']' 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 798386 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 798386 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 798386' 00:09:41.882 killing process with pid 798386 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 798386 00:09:41.882 Received shutdown signal, test time was about 10.000000 seconds 00:09:41.882 00:09:41.882 Latency(us) 00:09:41.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.882 =================================================================================================================== 00:09:41.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.882 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 798386 00:09:42.140 01:27:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.398 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.962 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:42.962 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 795133 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 795133 00:09:43.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 795133 Killed "${NVMF_APP[@]}" "$@" 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=799861 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 799861 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 799861 ']' 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.221 01:27:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.221 [2024-10-01 01:27:22.903338] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:43.221 [2024-10-01 01:27:22.903447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.221 [2024-10-01 01:27:22.970322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.221 [2024-10-01 01:27:23.056729] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.221 [2024-10-01 01:27:23.056814] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.221 [2024-10-01 01:27:23.056843] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.221 [2024-10-01 01:27:23.056854] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.221 [2024-10-01 01:27:23.056863] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
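This is the crux of the dirty variant: the previous nvmf_tgt (pid 795133) was SIGKILLed while the grown lvstore was still open, so no clean shutdown was recorded. When the freshly started target re-attaches the same backing file, the blobstore has to run recovery, and the test then verifies that the post-grow geometry survived. Roughly, as a sketch with the same illustrative path and shell variable as before, and expected counts taken from the checks that follow in the trace:

    # re-create the AIO bdev on the same file under the new target;
    # re-attaching the file triggers blobstore recovery
    # ("Performing recovery on blobstore" in the log)
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine

    # the recovered store must still report the grown geometry
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99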
00:09:43.221 [2024-10-01 01:27:23.056889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.478 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.479 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.736 [2024-10-01 01:27:23.449111] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:43.736 [2024-10-01 01:27:23.449245] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:43.736 [2024-10-01 01:27:23.449314] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fbfd849d-411a-41d2-928c-1a656078fb47 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=fbfd849d-411a-41d2-928c-1a656078fb47 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.736 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.994 01:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fbfd849d-411a-41d2-928c-1a656078fb47 -t 2000 00:09:44.252 [ 00:09:44.252 { 00:09:44.252 "name": "fbfd849d-411a-41d2-928c-1a656078fb47", 00:09:44.252 "aliases": [ 00:09:44.252 "lvs/lvol" 00:09:44.252 ], 00:09:44.252 "product_name": "Logical Volume", 00:09:44.252 "block_size": 4096, 00:09:44.252 "num_blocks": 38912, 00:09:44.252 "uuid": "fbfd849d-411a-41d2-928c-1a656078fb47", 00:09:44.252 "assigned_rate_limits": { 00:09:44.252 "rw_ios_per_sec": 0, 00:09:44.252 "rw_mbytes_per_sec": 0, 00:09:44.252 "r_mbytes_per_sec": 0, 00:09:44.252 "w_mbytes_per_sec": 0 00:09:44.252 }, 00:09:44.252 "claimed": false, 00:09:44.252 "zoned": false, 
00:09:44.252 "supported_io_types": { 00:09:44.252 "read": true, 00:09:44.252 "write": true, 00:09:44.252 "unmap": true, 00:09:44.252 "flush": false, 00:09:44.252 "reset": true, 00:09:44.252 "nvme_admin": false, 00:09:44.252 "nvme_io": false, 00:09:44.252 "nvme_io_md": false, 00:09:44.252 "write_zeroes": true, 00:09:44.252 "zcopy": false, 00:09:44.252 "get_zone_info": false, 00:09:44.252 "zone_management": false, 00:09:44.252 "zone_append": false, 00:09:44.252 "compare": false, 00:09:44.252 "compare_and_write": false, 00:09:44.252 "abort": false, 00:09:44.252 "seek_hole": true, 00:09:44.252 "seek_data": true, 00:09:44.252 "copy": false, 00:09:44.252 "nvme_iov_md": false 00:09:44.252 }, 00:09:44.252 "driver_specific": { 00:09:44.252 "lvol": { 00:09:44.252 "lvol_store_uuid": "5af2258d-2c22-405d-bd24-d9bfbd3a1d15", 00:09:44.252 "base_bdev": "aio_bdev", 00:09:44.252 "thin_provision": false, 00:09:44.252 "num_allocated_clusters": 38, 00:09:44.252 "snapshot": false, 00:09:44.252 "clone": false, 00:09:44.252 "esnap_clone": false 00:09:44.252 } 00:09:44.252 } 00:09:44.252 } 00:09:44.252 ] 00:09:44.252 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:44.252 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:44.252 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:44.510 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:44.510 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:44.510 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:44.768 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:44.768 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.025 [2024-10-01 01:27:24.810248] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:45.025 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:45.026 01:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:45.283 request: 00:09:45.283 { 00:09:45.283 "uuid": "5af2258d-2c22-405d-bd24-d9bfbd3a1d15", 00:09:45.283 "method": "bdev_lvol_get_lvstores", 00:09:45.283 "req_id": 1 00:09:45.283 } 00:09:45.283 Got JSON-RPC error response 00:09:45.283 response: 00:09:45.283 { 00:09:45.283 "code": -19, 00:09:45.283 "message": "No such device" 00:09:45.283 } 00:09:45.283 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:45.283 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.283 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.283 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.283 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:45.541 aio_bdev 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fbfd849d-411a-41d2-928c-1a656078fb47 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=fbfd849d-411a-41d2-928c-1a656078fb47 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.541 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:45.799 01:27:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fbfd849d-411a-41d2-928c-1a656078fb47 -t 2000 00:09:46.364 [ 00:09:46.364 { 00:09:46.364 "name": "fbfd849d-411a-41d2-928c-1a656078fb47", 00:09:46.364 "aliases": [ 00:09:46.364 "lvs/lvol" 00:09:46.364 ], 00:09:46.364 "product_name": "Logical Volume", 00:09:46.364 "block_size": 4096, 00:09:46.364 "num_blocks": 38912, 00:09:46.364 "uuid": "fbfd849d-411a-41d2-928c-1a656078fb47", 00:09:46.364 "assigned_rate_limits": { 00:09:46.364 "rw_ios_per_sec": 0, 00:09:46.364 "rw_mbytes_per_sec": 0, 00:09:46.364 "r_mbytes_per_sec": 0, 00:09:46.364 "w_mbytes_per_sec": 0 00:09:46.364 }, 00:09:46.364 "claimed": false, 00:09:46.364 "zoned": false, 00:09:46.364 "supported_io_types": { 00:09:46.364 "read": true, 00:09:46.364 "write": true, 00:09:46.364 "unmap": true, 00:09:46.364 "flush": false, 00:09:46.364 "reset": true, 00:09:46.364 "nvme_admin": false, 00:09:46.364 "nvme_io": false, 00:09:46.364 "nvme_io_md": false, 00:09:46.364 "write_zeroes": true, 00:09:46.364 "zcopy": false, 00:09:46.364 "get_zone_info": false, 00:09:46.364 "zone_management": false, 00:09:46.364 "zone_append": false, 00:09:46.364 "compare": false, 00:09:46.364 "compare_and_write": false, 00:09:46.364 "abort": false, 00:09:46.364 "seek_hole": true, 00:09:46.364 "seek_data": true, 00:09:46.364 "copy": false, 00:09:46.364 "nvme_iov_md": false 00:09:46.364 }, 00:09:46.364 "driver_specific": { 00:09:46.364 "lvol": { 00:09:46.364 "lvol_store_uuid": "5af2258d-2c22-405d-bd24-d9bfbd3a1d15", 00:09:46.364 "base_bdev": "aio_bdev", 00:09:46.364 "thin_provision": false, 00:09:46.364 "num_allocated_clusters": 38, 00:09:46.364 "snapshot": false, 00:09:46.364 "clone": false, 00:09:46.364 "esnap_clone": false 00:09:46.364 } 00:09:46.364 } 00:09:46.364 } 00:09:46.364 ] 00:09:46.364 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:46.364 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:46.364 01:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:46.622 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:46.622 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 00:09:46.622 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:46.880 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:46.880 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fbfd849d-411a-41d2-928c-1a656078fb47 00:09:47.137 01:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5af2258d-2c22-405d-bd24-d9bfbd3a1d15 
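The trace up to this point re-creates the AIO base bdev, waits for the lvol to be examined again, and then verifies the lvstore counters before tearing the stack down. A minimal sketch of that verify-and-cleanup sequence, assuming the rpc.py path and the lvstore/lvol UUIDs shown in this run (the helper variable names and the combined 61/99 assertion are illustrative shorthand, not the test script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=5af2258d-2c22-405d-bd24-d9bfbd3a1d15
    lvol=fbfd849d-411a-41d2-928c-1a656078fb47

    # lvstore counters reported after the dirty grow
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || exit 1

    # teardown: lvol first, then the lvstore, then the AIO base bdev and its backing file
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    $rpc bdev_aio_delete aio_bdev
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

The RPC names, UUIDs and file paths above appear verbatim in the surrounding trace; only the shell plumbing around them is a sketch.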
00:09:47.395 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:47.653 00:09:47.653 real 0m19.694s 00:09:47.653 user 0m49.560s 00:09:47.653 sys 0m4.710s 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:47.653 ************************************ 00:09:47.653 END TEST lvs_grow_dirty 00:09:47.653 ************************************ 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:47.653 nvmf_trace.0 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.653 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.653 rmmod nvme_tcp 00:09:47.653 rmmod nvme_fabrics 00:09:47.911 rmmod nvme_keyring 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 799861 ']' 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 799861 00:09:47.911 
01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 799861 ']' 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 799861 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 799861 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 799861' 00:09:47.911 killing process with pid 799861 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 799861 00:09:47.911 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 799861 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.170 01:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.072 00:09:50.072 real 0m43.075s 00:09:50.072 user 1m10.663s 00:09:50.072 sys 0m9.687s 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:50.072 ************************************ 00:09:50.072 END TEST nvmf_lvs_grow 00:09:50.072 ************************************ 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.072 ************************************ 00:09:50.072 START TEST nvmf_bdev_io_wait 00:09:50.072 ************************************ 00:09:50.072 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:50.329 * Looking for test storage... 00:09:50.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.329 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.329 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.329 01:27:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.330 --rc genhtml_branch_coverage=1 00:09:50.330 --rc genhtml_function_coverage=1 00:09:50.330 --rc genhtml_legend=1 00:09:50.330 --rc geninfo_all_blocks=1 00:09:50.330 --rc geninfo_unexecuted_blocks=1 00:09:50.330 00:09:50.330 ' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.330 --rc genhtml_branch_coverage=1 00:09:50.330 --rc genhtml_function_coverage=1 00:09:50.330 --rc genhtml_legend=1 00:09:50.330 --rc geninfo_all_blocks=1 00:09:50.330 --rc geninfo_unexecuted_blocks=1 00:09:50.330 00:09:50.330 ' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.330 --rc genhtml_branch_coverage=1 00:09:50.330 --rc genhtml_function_coverage=1 00:09:50.330 --rc genhtml_legend=1 00:09:50.330 --rc geninfo_all_blocks=1 00:09:50.330 --rc geninfo_unexecuted_blocks=1 00:09:50.330 00:09:50.330 ' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.330 --rc genhtml_branch_coverage=1 00:09:50.330 --rc genhtml_function_coverage=1 00:09:50.330 --rc genhtml_legend=1 00:09:50.330 --rc geninfo_all_blocks=1 00:09:50.330 --rc geninfo_unexecuted_blocks=1 00:09:50.330 00:09:50.330 ' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.330 01:27:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.330 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.331 01:27:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:52.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:52.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:52.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:52.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.227 01:27:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.227 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:52.485 00:09:52.485 --- 10.0.0.2 ping statistics --- 00:09:52.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.485 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:09:52.485 00:09:52.485 --- 10.0.0.1 ping statistics --- 00:09:52.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.485 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.485 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=802397 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 802397 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 802397 ']' 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.486 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.486 [2024-10-01 01:27:32.266716] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:52.486 [2024-10-01 01:27:32.266802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.744 [2024-10-01 01:27:32.343241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.744 [2024-10-01 01:27:32.435250] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.744 [2024-10-01 01:27:32.435319] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.744 [2024-10-01 01:27:32.435340] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.744 [2024-10-01 01:27:32.435354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.744 [2024-10-01 01:27:32.435366] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.744 [2024-10-01 01:27:32.435421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.744 [2024-10-01 01:27:32.435494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.744 [2024-10-01 01:27:32.435599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.744 [2024-10-01 01:27:32.435601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:52.744 [2024-10-01 01:27:32.589726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.744 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.002 Malloc0 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.002 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.003 [2024-10-01 01:27:32.651962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=802520 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=802523 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:53.003 { 00:09:53.003 "params": { 
00:09:53.003 "name": "Nvme$subsystem", 00:09:53.003 "trtype": "$TEST_TRANSPORT", 00:09:53.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "$NVMF_PORT", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.003 "hdgst": ${hdgst:-false}, 00:09:53.003 "ddgst": ${ddgst:-false} 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 } 00:09:53.003 EOF 00:09:53.003 )") 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=802526 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:53.003 { 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme$subsystem", 00:09:53.003 "trtype": "$TEST_TRANSPORT", 00:09:53.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "$NVMF_PORT", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.003 "hdgst": ${hdgst:-false}, 00:09:53.003 "ddgst": ${ddgst:-false} 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 } 00:09:53.003 EOF 00:09:53.003 )") 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=802530 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:53.003 { 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme$subsystem", 00:09:53.003 "trtype": "$TEST_TRANSPORT", 00:09:53.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "$NVMF_PORT", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.003 "hdgst": ${hdgst:-false}, 
00:09:53.003 "ddgst": ${ddgst:-false} 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 } 00:09:53.003 EOF 00:09:53.003 )") 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:53.003 { 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme$subsystem", 00:09:53.003 "trtype": "$TEST_TRANSPORT", 00:09:53.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "$NVMF_PORT", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.003 "hdgst": ${hdgst:-false}, 00:09:53.003 "ddgst": ${ddgst:-false} 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 } 00:09:53.003 EOF 00:09:53.003 )") 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 802520 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme1", 00:09:53.003 "trtype": "tcp", 00:09:53.003 "traddr": "10.0.0.2", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "4420", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.003 "hdgst": false, 00:09:53.003 "ddgst": false 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 }' 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme1", 00:09:53.003 "trtype": "tcp", 00:09:53.003 "traddr": "10.0.0.2", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "4420", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.003 "hdgst": false, 00:09:53.003 "ddgst": false 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 }' 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme1", 00:09:53.003 "trtype": "tcp", 00:09:53.003 "traddr": "10.0.0.2", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "4420", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.003 "hdgst": false, 00:09:53.003 "ddgst": false 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 }' 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:53.003 01:27:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:53.003 "params": { 00:09:53.003 "name": "Nvme1", 00:09:53.003 "trtype": "tcp", 00:09:53.003 "traddr": "10.0.0.2", 00:09:53.003 "adrfam": "ipv4", 00:09:53.003 "trsvcid": "4420", 00:09:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.003 "hdgst": false, 00:09:53.003 "ddgst": false 00:09:53.003 }, 00:09:53.003 "method": "bdev_nvme_attach_controller" 00:09:53.003 }' 00:09:53.003 [2024-10-01 01:27:32.703332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:53.004 [2024-10-01 01:27:32.703332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:53.004 [2024-10-01 01:27:32.703332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:53.004 [2024-10-01 01:27:32.703414] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-01 01:27:32.703414] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-01 01:27:32.703414] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:53.004 --proc-type=auto ] 00:09:53.004 --proc-type=auto ] 00:09:53.004 [2024-10-01 01:27:32.703774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:53.004 [2024-10-01 01:27:32.703845] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:53.261 [2024-10-01 01:27:32.876747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.261 [2024-10-01 01:27:32.951432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.262 [2024-10-01 01:27:32.975202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.262 [2024-10-01 01:27:33.049703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:53.262 [2024-10-01 01:27:33.074475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.520 [2024-10-01 01:27:33.150163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.520 [2024-10-01 01:27:33.174103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.520 [2024-10-01 01:27:33.253473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.778 Running I/O for 1 seconds... 00:09:53.778 Running I/O for 1 seconds... 00:09:53.778 Running I/O for 1 seconds... 00:09:54.036 Running I/O for 1 seconds... 00:09:54.602 11240.00 IOPS, 43.91 MiB/s 00:09:54.602 Latency(us) 00:09:54.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.602 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:54.602 Nvme1n1 : 1.01 11299.02 44.14 0.00 0.00 11283.73 5437.06 19223.89 00:09:54.602 =================================================================================================================== 00:09:54.602 Total : 11299.02 44.14 0.00 0.00 11283.73 5437.06 19223.89 00:09:54.602 5284.00 IOPS, 20.64 MiB/s 00:09:54.602 Latency(us) 00:09:54.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.602 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:54.602 Nvme1n1 : 1.02 5310.46 20.74 0.00 0.00 23809.48 7281.78 42331.40 00:09:54.602 =================================================================================================================== 00:09:54.602 Total : 5310.46 20.74 0.00 0.00 23809.48 7281.78 42331.40 00:09:54.859 5290.00 IOPS, 20.66 MiB/s 00:09:54.859 Latency(us) 00:09:54.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.860 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:54.860 Nvme1n1 : 1.01 5375.47 21.00 0.00 0.00 23699.58 7330.32 48351.00 00:09:54.860 =================================================================================================================== 00:09:54.860 Total : 5375.47 21.00 0.00 0.00 23699.58 7330.32 48351.00 00:09:54.860 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 802523 00:09:54.860 192064.00 IOPS, 750.25 MiB/s 00:09:54.860 Latency(us) 00:09:54.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.860 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:54.860 Nvme1n1 : 1.00 191696.47 748.81 0.00 0.00 664.07 304.92 1917.53 00:09:54.860 =================================================================================================================== 00:09:54.860 Total : 191696.47 748.81 0.00 0.00 664.07 304.92 1917.53 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 802526 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 802530 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.117 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.117 rmmod nvme_tcp 00:09:55.117 rmmod nvme_fabrics 00:09:55.117 rmmod nvme_keyring 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 802397 ']' 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 802397 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 802397 ']' 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 802397 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.375 01:27:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 802397 00:09:55.375 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.375 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.375 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 802397' 00:09:55.375 killing process with pid 802397 00:09:55.375 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 802397 00:09:55.375 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 802397 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 
-- # '[' '' == iso ']' 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.633 01:27:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.534 00:09:57.534 real 0m7.398s 00:09:57.534 user 0m16.121s 00:09:57.534 sys 0m3.998s 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.534 ************************************ 00:09:57.534 END TEST nvmf_bdev_io_wait 00:09:57.534 ************************************ 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.534 ************************************ 00:09:57.534 START TEST nvmf_queue_depth 00:09:57.534 ************************************ 00:09:57.534 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:57.792 * Looking for test storage... 
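Condensed for reference, the nvmftestfini teardown traced above for the nvmf_bdev_io_wait run reduces to roughly the following; the helper names come from nvmf/common.sh and autotest_common.sh as they appear in the trace, while the retry loop and error handling around them are omitted here.

modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"; wait "$nvmfpid"                        # killprocess: stop the nvmf_tgt from nvmfappstart
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only rules the test tagged SPDK_NVMF
_remove_spdk_ns                                         # tear down the cvl_0_0_ns_spdk network namespace
ip -4 addr flush cvl_0_1                                # clear the initiator-side interface address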
00:09:57.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:57.792 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.793 --rc genhtml_branch_coverage=1 00:09:57.793 --rc genhtml_function_coverage=1 00:09:57.793 --rc genhtml_legend=1 00:09:57.793 --rc geninfo_all_blocks=1 00:09:57.793 --rc geninfo_unexecuted_blocks=1 00:09:57.793 00:09:57.793 ' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.793 --rc genhtml_branch_coverage=1 00:09:57.793 --rc genhtml_function_coverage=1 00:09:57.793 --rc genhtml_legend=1 00:09:57.793 --rc geninfo_all_blocks=1 00:09:57.793 --rc geninfo_unexecuted_blocks=1 00:09:57.793 00:09:57.793 ' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.793 --rc genhtml_branch_coverage=1 00:09:57.793 --rc genhtml_function_coverage=1 00:09:57.793 --rc genhtml_legend=1 00:09:57.793 --rc geninfo_all_blocks=1 00:09:57.793 --rc geninfo_unexecuted_blocks=1 00:09:57.793 00:09:57.793 ' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.793 --rc genhtml_branch_coverage=1 00:09:57.793 --rc genhtml_function_coverage=1 00:09:57.793 --rc genhtml_legend=1 00:09:57.793 --rc geninfo_all_blocks=1 00:09:57.793 --rc geninfo_unexecuted_blocks=1 00:09:57.793 00:09:57.793 ' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.793 01:27:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:59.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:59.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.694 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:59.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:59.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.695 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:09:59.952 00:09:59.952 --- 10.0.0.2 ping statistics --- 00:09:59.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.952 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:09:59.952 00:09:59.952 --- 10.0.0.1 ping statistics --- 00:09:59.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.952 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=804787 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 804787 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 804787 ']' 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.952 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:59.952 [2024-10-01 01:27:39.714026] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:59.952 [2024-10-01 01:27:39.714112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.952 [2024-10-01 01:27:39.784466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.210 [2024-10-01 01:27:39.874033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.210 [2024-10-01 01:27:39.874102] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.210 [2024-10-01 01:27:39.874119] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.210 [2024-10-01 01:27:39.874132] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.210 [2024-10-01 01:27:39.874144] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.210 [2024-10-01 01:27:39.874183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.210 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.210 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:00.210 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:00.210 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.210 01:27:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.210 [2024-10-01 01:27:40.023313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.210 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.468 Malloc0 00:10:00.468 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.468 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.468 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.468 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.468 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.469 01:27:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.469 [2024-10-01 01:27:40.089773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=804807 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 804807 /var/tmp/bdevperf.sock 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 804807 ']' 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.469 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.469 [2024-10-01 01:27:40.142633] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:00.469 [2024-10-01 01:27:40.142708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804807 ] 00:10:00.469 [2024-10-01 01:27:40.213816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.469 [2024-10-01 01:27:40.306951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.769 NVMe0n1 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.769 01:27:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:01.053 Running I/O for 10 seconds... 00:10:11.092 7195.00 IOPS, 28.11 MiB/s 7680.00 IOPS, 30.00 MiB/s 7679.33 IOPS, 30.00 MiB/s 7682.50 IOPS, 30.01 MiB/s 7771.60 IOPS, 30.36 MiB/s 7707.50 IOPS, 30.11 MiB/s 7753.29 IOPS, 30.29 MiB/s 7804.50 IOPS, 30.49 MiB/s 7797.11 IOPS, 30.46 MiB/s 7781.80 IOPS, 30.40 MiB/s 00:10:11.092 Latency(us) 00:10:11.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.092 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:11.092 Verification LBA range: start 0x0 length 0x4000 00:10:11.092 NVMe0n1 : 10.08 7823.98 30.56 0.00 0.00 130317.90 16893.72 78837.38 00:10:11.092 =================================================================================================================== 00:10:11.092 Total : 7823.98 30.56 0.00 0.00 130317.90 16893.72 78837.38 00:10:11.092 { 00:10:11.092 "results": [ 00:10:11.092 { 00:10:11.092 "job": "NVMe0n1", 00:10:11.092 "core_mask": "0x1", 00:10:11.092 "workload": "verify", 00:10:11.092 "status": "finished", 00:10:11.092 "verify_range": { 00:10:11.092 "start": 0, 00:10:11.092 "length": 16384 00:10:11.092 }, 00:10:11.092 "queue_depth": 1024, 00:10:11.092 "io_size": 4096, 00:10:11.092 "runtime": 10.076963, 00:10:11.092 "iops": 7823.984269863847, 00:10:11.092 "mibps": 30.56243855415565, 00:10:11.092 "io_failed": 0, 00:10:11.092 "io_timeout": 0, 00:10:11.092 "avg_latency_us": 130317.903174375, 00:10:11.092 "min_latency_us": 16893.724444444444, 00:10:11.092 "max_latency_us": 78837.38074074074 00:10:11.092 } 00:10:11.092 ], 00:10:11.092 "core_count": 1 00:10:11.092 } 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 804807 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 804807 ']' 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 804807 00:10:11.092 01:27:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 804807 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 804807' 00:10:11.092 killing process with pid 804807 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 804807 00:10:11.092 Received shutdown signal, test time was about 10.000000 seconds 00:10:11.092 00:10:11.092 Latency(us) 00:10:11.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.092 =================================================================================================================== 00:10:11.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:11.092 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 804807 00:10:11.351 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:11.351 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:11.351 01:27:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.351 rmmod nvme_tcp 00:10:11.351 rmmod nvme_fabrics 00:10:11.351 rmmod nvme_keyring 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 804787 ']' 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 804787 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 804787 ']' 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 804787 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 804787 00:10:11.351 01:27:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 804787' 00:10:11.351 killing process with pid 804787 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 804787 00:10:11.351 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 804787 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.610 01:27:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.144 00:10:14.144 real 0m16.084s 00:10:14.144 user 0m21.668s 00:10:14.144 sys 0m3.534s 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.144 ************************************ 00:10:14.144 END TEST nvmf_queue_depth 00:10:14.144 ************************************ 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.144 ************************************ 00:10:14.144 START TEST nvmf_target_multipath 00:10:14.144 ************************************ 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.144 * Looking for test storage... 
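Stripped of the xtrace noise, the nvmf_queue_depth run that finishes above follows this shape; the commands are taken from the trace (rpc_cmd is the autotest helper that forwards to the SPDK RPC interface, and the long build/example paths are shortened here).

# Target side: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with core mask 0x2
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf starts idle (-z), a controller is attached over its private
# RPC socket, then the 10-second verify run at queue depth 1024 is kicked off
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests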
00:10:14.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:14.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.144 --rc genhtml_branch_coverage=1 00:10:14.144 --rc genhtml_function_coverage=1 00:10:14.144 --rc genhtml_legend=1 00:10:14.144 --rc geninfo_all_blocks=1 00:10:14.144 --rc geninfo_unexecuted_blocks=1 00:10:14.144 00:10:14.144 ' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:14.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.144 --rc genhtml_branch_coverage=1 00:10:14.144 --rc genhtml_function_coverage=1 00:10:14.144 --rc genhtml_legend=1 00:10:14.144 --rc geninfo_all_blocks=1 00:10:14.144 --rc geninfo_unexecuted_blocks=1 00:10:14.144 00:10:14.144 ' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:14.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.144 --rc genhtml_branch_coverage=1 00:10:14.144 --rc genhtml_function_coverage=1 00:10:14.144 --rc genhtml_legend=1 00:10:14.144 --rc geninfo_all_blocks=1 00:10:14.144 --rc geninfo_unexecuted_blocks=1 00:10:14.144 00:10:14.144 ' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:14.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.144 --rc genhtml_branch_coverage=1 00:10:14.144 --rc genhtml_function_coverage=1 00:10:14.144 --rc genhtml_legend=1 00:10:14.144 --rc geninfo_all_blocks=1 00:10:14.144 --rc geninfo_unexecuted_blocks=1 00:10:14.144 00:10:14.144 ' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.144 01:27:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.043 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:16.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:16.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:16.044 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:16.044 01:27:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:16.044 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:10:16.044 00:10:16.044 --- 10.0.0.2 ping statistics --- 00:10:16.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.044 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:10:16.044 00:10:16.044 --- 10.0.0.1 ping statistics --- 00:10:16.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.044 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:16.044 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:16.303 only one NIC for nvmf test 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
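The nvmf_tcp_init block above moves one physical port into a private namespace for the target, keeps the second port in the root namespace as the initiator, opens TCP/4420, and verifies reachability in both directions. A condensed sketch of that sequence follows, with the interface names and addresses copied from the trace; it assumes the two ports exist and that it runs as root, and the tagged iptables rule is what the later cleanup greps back out.

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk   # names as used in the trace

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Allow NVMe/TCP (port 4420) in from the initiator interface; the SPDK_NVMF
  # comment tag lets the teardown strip exactly this rule later.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"

  ping -c 1 10.0.0.2                                 # root ns -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator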
00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.303 rmmod nvme_tcp 00:10:16.303 rmmod nvme_fabrics 00:10:16.303 rmmod nvme_keyring 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:16.303 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.304 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.304 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.304 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.304 01:27:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.208 00:10:18.208 real 0m4.554s 00:10:18.208 user 0m0.928s 00:10:18.208 sys 0m1.610s 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.208 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:18.208 ************************************ 00:10:18.208 END TEST nvmf_target_multipath 00:10:18.208 ************************************ 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.466 ************************************ 00:10:18.466 START TEST nvmf_zcopy 00:10:18.466 ************************************ 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:18.466 * Looking for test storage... 
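Both nvmftestfini passes traced above tear things down in the same order: sync, unload the NVMe/TCP modules with retries (they refuse to unload while references remain), strip only the SPDK-tagged iptables rules, remove the target namespace, and flush the leftover initiator address. A hedged sketch of that order follows; the plain ip netns delete stands in for the _remove_spdk_ns helper, whose body is not shown in the trace, and the sleep between retries is an assumption.

  #!/usr/bin/env bash
  NS=cvl_0_0_ns_spdk INI_IF=cvl_0_1                  # names as used in the trace

  sync
  set +e
  for i in {1..20}; do                               # module unloads only succeed once idle
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e

  # Drop only the rules tagged SPDK_NVMF; everything else is restored untouched.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete "$NS" 2>/dev/null || true          # stand-in for _remove_spdk_ns
  ip -4 addr flush "$INI_IF"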
00:10:18.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.466 --rc genhtml_branch_coverage=1 00:10:18.466 --rc genhtml_function_coverage=1 00:10:18.466 --rc genhtml_legend=1 00:10:18.466 --rc geninfo_all_blocks=1 00:10:18.466 --rc geninfo_unexecuted_blocks=1 00:10:18.466 00:10:18.466 ' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.466 --rc genhtml_branch_coverage=1 00:10:18.466 --rc genhtml_function_coverage=1 00:10:18.466 --rc genhtml_legend=1 00:10:18.466 --rc geninfo_all_blocks=1 00:10:18.466 --rc geninfo_unexecuted_blocks=1 00:10:18.466 00:10:18.466 ' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.466 --rc genhtml_branch_coverage=1 00:10:18.466 --rc genhtml_function_coverage=1 00:10:18.466 --rc genhtml_legend=1 00:10:18.466 --rc geninfo_all_blocks=1 00:10:18.466 --rc geninfo_unexecuted_blocks=1 00:10:18.466 00:10:18.466 ' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:18.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.466 --rc genhtml_branch_coverage=1 00:10:18.466 --rc genhtml_function_coverage=1 00:10:18.466 --rc genhtml_legend=1 00:10:18.466 --rc geninfo_all_blocks=1 00:10:18.466 --rc geninfo_unexecuted_blocks=1 00:10:18.466 00:10:18.466 ' 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.466 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.467 01:27:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:20.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:20.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:20.992 
01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:20.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:20.992 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:20.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.993 01:28:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:20.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:20.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms
00:10:20.993
00:10:20.993 --- 10.0.0.2 ping statistics ---
00:10:20.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:20.993 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:20.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:20.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms
00:10:20.993
00:10:20.993 --- 10.0.0.1 ping statistics ---
00:10:20.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:20.993 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=810015
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 810015
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 810015 ']'
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:20.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.993 [2024-10-01 01:28:00.553528] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:10:20.993 [2024-10-01 01:28:00.553650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.993 [2024-10-01 01:28:00.624853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.993 [2024-10-01 01:28:00.720485] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.993 [2024-10-01 01:28:00.720547] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.993 [2024-10-01 01:28:00.720563] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.993 [2024-10-01 01:28:00.720575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.993 [2024-10-01 01:28:00.720585] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.993 [2024-10-01 01:28:00.720625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.993 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 [2024-10-01 01:28:00.867360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 [2024-10-01 01:28:00.883611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 malloc0 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:21.251 { 00:10:21.251 "params": { 00:10:21.251 "name": "Nvme$subsystem", 00:10:21.251 "trtype": "$TEST_TRANSPORT", 00:10:21.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.251 "adrfam": "ipv4", 00:10:21.251 "trsvcid": "$NVMF_PORT", 00:10:21.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.251 "hdgst": ${hdgst:-false}, 00:10:21.251 "ddgst": ${ddgst:-false} 00:10:21.251 }, 00:10:21.251 "method": "bdev_nvme_attach_controller" 00:10:21.251 } 00:10:21.251 EOF 00:10:21.251 )") 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
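For reference, the target-side bring-up that the trace above walks through condenses to roughly the following shell sequence. This is a sketch assembled from the xtrace records of this run (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from this log; rpc_cmd stands for the autotest RPC helper that talks to the /var/tmp/spdk.sock socket), not a drop-in replacement for nvmf/common.sh or target/zcopy.sh:

  # The target-side port is moved into a private network namespace (cvl_0_0_ns_spdk).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # nvmf_tgt runs inside the namespace and is configured over its RPC socket;
  # note the --zcopy flag on the transport, which is what this test exercises.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # bdevperf then attaches from the root namespace, using the JSON config that
  # gen_nvmf_target_json prints below on /dev/fd/62.
  ./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

Isolating the target end in its own namespace is what appears to let the two ports of one physical adapter exchange real NVMe/TCP traffic on the same build host.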
00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:10:21.251 01:28:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:10:21.251 "params": {
00:10:21.251 "name": "Nvme1",
00:10:21.251 "trtype": "tcp",
00:10:21.251 "traddr": "10.0.0.2",
00:10:21.251 "adrfam": "ipv4",
00:10:21.251 "trsvcid": "4420",
00:10:21.251 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:21.251 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:21.251 "hdgst": false,
00:10:21.251 "ddgst": false
00:10:21.251 },
00:10:21.251 "method": "bdev_nvme_attach_controller"
00:10:21.251 }'
00:10:21.252 [2024-10-01 01:28:00.984588] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:10:21.252 [2024-10-01 01:28:00.984663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810051 ]
00:10:21.252 [2024-10-01 01:28:01.051834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:21.509 [2024-10-01 01:28:01.146065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.766 Running I/O for 10 seconds...
00:10:31.812 5556.00 IOPS, 43.41 MiB/s 5584.50 IOPS, 43.63 MiB/s 5614.33 IOPS, 43.86 MiB/s 5613.00 IOPS, 43.85 MiB/s 5624.00 IOPS, 43.94 MiB/s 5632.83 IOPS, 44.01 MiB/s 5638.29 IOPS, 44.05 MiB/s 5635.50 IOPS, 44.03 MiB/s 5638.56 IOPS, 44.05 MiB/s 5636.70 IOPS, 44.04 MiB/s
00:10:31.812 Latency(us)
00:10:31.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:31.812 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:31.812 Verification LBA range: start 0x0 length 0x1000
00:10:31.812 Nvme1n1 : 10.02 5638.59 44.05 0.00 0.00 22639.68 4029.25 31263.10
00:10:31.812 ===================================================================================================================
00:10:31.812 Total : 5638.59 44.05 0.00 0.00 22639.68 4029.25 31263.10
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=811364
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:10:31.812 {
00:10:31.812 "params": {
00:10:31.812 "name": "Nvme$subsystem",
00:10:31.812 "trtype": "$TEST_TRANSPORT",
00:10:31.812 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:31.812 "adrfam": "ipv4",
00:10:31.812 "trsvcid": "$NVMF_PORT",
00:10:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:31.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:31.812 "hdgst":
${hdgst:-false}, 00:10:31.812 "ddgst": ${ddgst:-false} 00:10:31.812 }, 00:10:31.812 "method": "bdev_nvme_attach_controller" 00:10:31.812 } 00:10:31.812 EOF 00:10:31.812 )") 00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:31.812 [2024-10-01 01:28:11.663612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.812 [2024-10-01 01:28:11.663652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:31.812 01:28:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:31.812 "params": { 00:10:31.812 "name": "Nvme1", 00:10:31.812 "trtype": "tcp", 00:10:31.812 "traddr": "10.0.0.2", 00:10:31.812 "adrfam": "ipv4", 00:10:31.812 "trsvcid": "4420", 00:10:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:31.812 "hdgst": false, 00:10:31.812 "ddgst": false 00:10:31.812 }, 00:10:31.812 "method": "bdev_nvme_attach_controller" 00:10:31.812 }' 00:10:32.070 [2024-10-01 01:28:11.671573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.070 [2024-10-01 01:28:11.671594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.070 [2024-10-01 01:28:11.679592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.070 [2024-10-01 01:28:11.679612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.070 [2024-10-01 01:28:11.687613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.070 [2024-10-01 01:28:11.687633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.070 [2024-10-01 01:28:11.695636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.070 [2024-10-01 01:28:11.695656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.070 [2024-10-01 01:28:11.700797] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:32.070 [2024-10-01 01:28:11.700871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811364 ] 00:10:32.070 [2024-10-01 01:28:11.703656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.703676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.711684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.711704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.719705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.719724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.727726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.727746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.735766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.735790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.743788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.743813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.751809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.751834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.759835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.759859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.766259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.071 [2024-10-01 01:28:11.767858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.767882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.775906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.775948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.783919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.783955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.791927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.791953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.799948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.799974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:32.071 [2024-10-01 01:28:11.807971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.808003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.815995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.816028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.824057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.824091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.832058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.832083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.840079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.840099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.848091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.848112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.856107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.856128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.862331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.071 [2024-10-01 01:28:11.864118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.864139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.872139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.872159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.880175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.880206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.888201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.888237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.896223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.896255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.904246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.904315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 01:28:11.912276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.912328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.071 [2024-10-01 
01:28:11.920319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.071 [2024-10-01 01:28:11.920358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.928332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.928385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.936330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.936355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.944372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.944412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.952397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.952437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.960418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.960456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.968419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.968443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.976439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.976473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.984475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.984506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:11.992495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:11.992524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:12.000516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:12.000543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:12.008541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:12.008570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:12.016563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:12.016590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.328 [2024-10-01 01:28:12.024585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.328 [2024-10-01 01:28:12.024612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.032607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.032633] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.040618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.040640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.048634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.048655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.056652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.056672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.064680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.064703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.072701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.072721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.080723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.080744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.088746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.088766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.096768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.096787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.104793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.104814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.112814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.112836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.120835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.120855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.128858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.128878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.136881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.136901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.144902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.144922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.152926] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.152947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.161717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.161744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 [2024-10-01 01:28:12.168979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.169024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.329 Running I/O for 5 seconds... 00:10:32.329 [2024-10-01 01:28:12.177035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.329 [2024-10-01 01:28:12.177057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.191758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.191788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.202312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.202340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.212790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.212819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.224150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.224180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.237079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.237106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.247634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.247665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.259460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.259491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.270959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.270986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.282791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.282818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.294336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.294368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.307917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 
[2024-10-01 01:28:12.307945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.318821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.318858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.330451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.330478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.344237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.344265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.354948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.354975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.366381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.366412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.377948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.377976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.389339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.389370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.402187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.402216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.587 [2024-10-01 01:28:12.412425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.587 [2024-10-01 01:28:12.412452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.588 [2024-10-01 01:28:12.423521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.588 [2024-10-01 01:28:12.423548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.588 [2024-10-01 01:28:12.436420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.588 [2024-10-01 01:28:12.436451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.446467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.446494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.458481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.458508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.469562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.469590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.480671] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.480698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.493662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.493689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.504141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.504169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.514896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.514927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.525816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.525847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.536622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.536650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.547642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.547670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.558511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.558539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.569388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.569415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.580799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.580826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.591735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.591762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.602927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.602958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.613965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.613994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.624235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.624263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.635053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.635080] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.846 [2024-10-01 01:28:12.647937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.846 [2024-10-01 01:28:12.647964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.847 [2024-10-01 01:28:12.657950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.847 [2024-10-01 01:28:12.657977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.847 [2024-10-01 01:28:12.669789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.847 [2024-10-01 01:28:12.669817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.847 [2024-10-01 01:28:12.680727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.847 [2024-10-01 01:28:12.680754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.847 [2024-10-01 01:28:12.691391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.847 [2024-10-01 01:28:12.691419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.104 [2024-10-01 01:28:12.702099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.104 [2024-10-01 01:28:12.702127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.104 [2024-10-01 01:28:12.713211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.104 [2024-10-01 01:28:12.713238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.104 [2024-10-01 01:28:12.724534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.104 [2024-10-01 01:28:12.724560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.104 [2024-10-01 01:28:12.735783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.104 [2024-10-01 01:28:12.735810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.746814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.746849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.757797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.757840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.769421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.769447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.780960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.780987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.791861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.791887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.802773] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.802800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.815857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.815885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.825681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.825708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.836721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.836748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.849770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.849796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.858965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.859015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.870954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.870981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.881715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.881742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.892599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.892626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.903581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.903607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.914423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.914451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.927697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.927725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.937424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.937451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.105 [2024-10-01 01:28:12.949220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.105 [2024-10-01 01:28:12.949247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:12.960042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:12.960088] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:12.971369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:12.971396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:12.984520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:12.984546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:12.995105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:12.995132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.006155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.006183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.019052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.019080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.029073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.029101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.039816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.039843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.051305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.051331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.062106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.062134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.075265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.075306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.087538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.087565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.096846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.096876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.109159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.109187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.122216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.122243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.132887] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.132919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.143860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.143887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.156229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.156256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.166193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.166220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.177316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.177351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 11398.00 IOPS, 89.05 MiB/s [2024-10-01 01:28:13.188135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.188163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.199055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.199082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.363 [2024-10-01 01:28:13.209810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.363 [2024-10-01 01:28:13.209837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.221101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.221128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.231506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.231533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.242374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.242401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.255251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.255280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.265185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.265213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.276359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.276385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.286954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 
01:28:13.286982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.297046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.297073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.307598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.307626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.317988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.318023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.328488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.328515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.338949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.338976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.349622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.349650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.360165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.360192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.373040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.373068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.385114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.385142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.394101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.394129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.405281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.405309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.416098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.416127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.426993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.427030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.439452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.439480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.449705] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.449733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.460219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.460247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.622 [2024-10-01 01:28:13.470882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.622 [2024-10-01 01:28:13.470910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.881 [2024-10-01 01:28:13.483584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.881 [2024-10-01 01:28:13.483612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.881 [2024-10-01 01:28:13.493823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.881 [2024-10-01 01:28:13.493851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.881 [2024-10-01 01:28:13.504346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.881 [2024-10-01 01:28:13.504374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.881 [2024-10-01 01:28:13.514951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.881 [2024-10-01 01:28:13.514978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.526153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.526182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.537429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.537463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.548672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.548698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.560563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.560594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.571740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.571767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.583440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.583470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.595265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.595296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.606992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.607046] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.618615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.618642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.630119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.630147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.641804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.641831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.652840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.652867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.663950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.663977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.675695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.675722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.687304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.687334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.698720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.698747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.710305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.710333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.721605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.721631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.882 [2024-10-01 01:28:13.732412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.882 [2024-10-01 01:28:13.732454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.744059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.744087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.757387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.757418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.767781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.767807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.778991] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.779042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.791686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.791717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.803575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.803606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.813395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.813421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.825706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.825732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.837212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.837244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.849089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.849120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.860738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.860765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.872619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.872649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.883866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.883893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.896934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.896960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.907805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.907833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.918924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.918951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.929807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.929835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.941269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.941315] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.952990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.953041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.964325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.964353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.977866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.977893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.140 [2024-10-01 01:28:13.988942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.140 [2024-10-01 01:28:13.988969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.001581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.001608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.012987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.013022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.026187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.026223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.036827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.036854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.048751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.048778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.060836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.060862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.072723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.072749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.086746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.086773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.098125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.098157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.109405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.109431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.122479] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.122509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.133188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.133219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.144657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.399 [2024-10-01 01:28:14.144683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.399 [2024-10-01 01:28:14.156326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.156353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.169460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.169503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.180548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.180579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 11390.50 IOPS, 88.99 MiB/s [2024-10-01 01:28:14.192338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.192366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.204209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.204239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.215520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.215563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.227196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.227227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.238210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.238237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.400 [2024-10-01 01:28:14.249804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.400 [2024-10-01 01:28:14.249854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.260994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.261030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.271965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.271993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.283511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 
01:28:14.283543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.294578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.294609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.306222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.306253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.317688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.317716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.329009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.329036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.340415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.340446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.351967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.351995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.363441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.363469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.376082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.376118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.386743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.386774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.397883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.397909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.411295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.411336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.422041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.422069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.433298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.433325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.444101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.444129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.455267] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.455297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.466674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.466711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.658 [2024-10-01 01:28:14.477724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.658 [2024-10-01 01:28:14.477750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.659 [2024-10-01 01:28:14.488630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.659 [2024-10-01 01:28:14.488656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.659 [2024-10-01 01:28:14.499799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.659 [2024-10-01 01:28:14.499825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.659 [2024-10-01 01:28:14.511367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.659 [2024-10-01 01:28:14.511397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.522448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.522476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.533705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.533732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.545023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.545052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.556066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.556113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.567330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.567361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.580666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.580693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.591546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.591574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.603433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.603467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.614183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.614212] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.625510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.625545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.637217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.637248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.648674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.648702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.662027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.662055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.672873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.672899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.684272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.684316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.697162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.697193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.707357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.707389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.719320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.719347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.730524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.730555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.741735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.741761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.752690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.752717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.918 [2024-10-01 01:28:14.765679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.918 [2024-10-01 01:28:14.765707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.775583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.775610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.787500] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.787531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.798902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.798929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.809656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.809687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.821389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.821420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.178 [2024-10-01 01:28:14.832372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.178 [2024-10-01 01:28:14.832402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.843510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.843537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.854611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.854638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.865729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.865755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.876873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.876900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.889703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.889730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.899501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.899527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.911487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.911513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.922473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.922500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.933704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.933731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.946413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.946444] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.957612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.957643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.969236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.969263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.980618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.980644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:14.992218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:14.992250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:15.003264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:15.003292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:15.016043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:15.016071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.179 [2024-10-01 01:28:15.026024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.179 [2024-10-01 01:28:15.026052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.037182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.037213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.048645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.048686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.059988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.060040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.073342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.073370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.084076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.084107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.437 [2024-10-01 01:28:15.095801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.437 [2024-10-01 01:28:15.095829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.107207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.107237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.118697] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.118723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.129963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.129990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.141480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.141507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.154846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.154873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.165708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.165734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.176824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.176851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 11345.00 IOPS, 88.63 MiB/s [2024-10-01 01:28:15.190287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.190314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.201029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.201057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.212402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.212432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.225811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.225838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.236272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.236303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.247460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.247488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.259174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.259201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.270152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 01:28:15.270182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.438 [2024-10-01 01:28:15.281434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.438 [2024-10-01 
01:28:15.281461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.292552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.292582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.304015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.304058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.315201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.315231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.326485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.326522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.337863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.337890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.349708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.349735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.361179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.361207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.374198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.374230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.384636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.384663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.396627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.396654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.407939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.407966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.419066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.419094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.429770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.429797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.441443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.441473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.453277] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.453306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.464655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.464682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.475711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.475738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.487059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.487090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.498069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.498097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.510247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.510277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.521828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.521855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.533392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.533422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.696 [2024-10-01 01:28:15.546728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.696 [2024-10-01 01:28:15.546786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.557236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.557263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.568221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.568252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.579439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.579467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.590704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.590731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.601585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.601611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.612842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.612869] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.624533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.624564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.636119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.636149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.647819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.647846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.658868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.658895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.672108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.672140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.682713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.682744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.694711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.694740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.706386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.706417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.717859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.717886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.729182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.729213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.741103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.741135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.752546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.752577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.765804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.765841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.776800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.776842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.788380] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.788407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.954 [2024-10-01 01:28:15.799690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.954 [2024-10-01 01:28:15.799732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.811104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.811135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.822693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.822734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.833746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.833772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.844810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.844836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.856166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.856197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.867336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.867367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.880780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.880808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.891668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.891694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.903067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.903112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.914573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.914600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.925724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.925750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.939117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.939144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.949695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.949722] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.960481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.960509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.971797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.971824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.985198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.985239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:15.996281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:15.996321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:16.007845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:16.007872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:16.019558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:16.019585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:16.032827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:16.032854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:16.043827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:16.043853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.212 [2024-10-01 01:28:16.055150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.212 [2024-10-01 01:28:16.055181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.066537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.066567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.078035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.078079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.089324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.089354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.100607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.100634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.111543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.111570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.123453] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.123484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.134688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.134715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.145662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.145688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.156697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.156723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.168064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.168109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.179517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.179544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 11306.25 IOPS, 88.33 MiB/s [2024-10-01 01:28:16.192898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.192925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.203678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.203705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.215071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.215101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.228183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.228213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.238987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.239038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.250038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.250083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.261063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.261093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.272418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.272448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.283490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 
01:28:16.283520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.294738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.294765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.308116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.308148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.471 [2024-10-01 01:28:16.318838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.471 [2024-10-01 01:28:16.318864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.330116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.330147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.343195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.343223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.353844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.353870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.365846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.365873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.376876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.376903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.388767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.388793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.401122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.401153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.412495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.412522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.423615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.423642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.437096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.437141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.447727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.447757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.459686] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.459713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.471187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.471217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.484691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.484719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.495367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.495398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.506857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.506884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.518290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.518334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.530007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.530034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.541173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.541216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.552728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.552755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.563636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.563663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.729 [2024-10-01 01:28:16.576553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.729 [2024-10-01 01:28:16.576580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.587064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.587095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.598099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.598126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.609313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.609342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.620458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.620488] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.631523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.631554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.643184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.643212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.654620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.654663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.665501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.665531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.678838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.678865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.689043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.689071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.701056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.701099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.712271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.712302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.725530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.725557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.736575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.736606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.747792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.747819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.761236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.761267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.772118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.772161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.783179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.783209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.796191] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.796223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.806477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.806508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.818698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.818727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.987 [2024-10-01 01:28:16.830100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.987 [2024-10-01 01:28:16.830131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.841158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.841185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.852252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.852294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.865465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.865493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.875240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.875271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.887169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.887200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.898375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.898410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.910129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.910162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.921453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.921480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.932582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.932613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.943878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.943905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.955814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.955840] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.244 [2024-10-01 01:28:16.967142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.244 [2024-10-01 01:28:16.967169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:16.978425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:16.978453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:16.989907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:16.989934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.001311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.001338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.012855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.012882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.026236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.026268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.036994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.037030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.048230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.048261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.066539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.066573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.077175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.077209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.245 [2024-10-01 01:28:17.087763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.245 [2024-10-01 01:28:17.087791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.100180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.100207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.109942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.109969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.120559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.120586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.131245] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.131272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.144430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.144457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.154756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.154784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.164892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.164919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.175300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.175328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.185729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.185756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 11310.80 IOPS, 88.37 MiB/s [2024-10-01 01:28:17.195418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.195445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 00:10:37.503 Latency(us) 00:10:37.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.503 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:37.503 Nvme1n1 : 5.01 11315.76 88.40 0.00 0.00 11297.85 4029.25 23204.60 00:10:37.503 =================================================================================================================== 00:10:37.503 Total : 11315.76 88.40 0.00 0.00 11297.85 4029.25 23204.60 00:10:37.503 [2024-10-01 01:28:17.202378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.202401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.210394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.210418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.218439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.218481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.226466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.226513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.234484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.234531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.242507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 
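For reference, the MiB/s column in the summary table above is simply IOPS multiplied by the job's 8192-byte IO size, while the Average/min/max columns are latencies in microseconds (the table is headed Latency(us)). A quick stand-alone check, not part of the test itself, reproduces the reported throughput:

    awk 'BEGIN { printf "%.2f MiB/s\n", 11315.76 * 8192 / (1024 * 1024) }'
    # prints 88.40 MiB/s, matching the Total row of the table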
[2024-10-01 01:28:17.242552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.250527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.250574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.258552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.258598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.266573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.266619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.274601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.274647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.282615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.282662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.290640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.290690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.298656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.298705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.306681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.306733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.314697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.314744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.322715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.322762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.330738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.503 [2024-10-01 01:28:17.330786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.503 [2024-10-01 01:28:17.338761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.504 [2024-10-01 01:28:17.338807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.504 [2024-10-01 01:28:17.346757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.504 [2024-10-01 01:28:17.346783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.504 [2024-10-01 01:28:17.354787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.504 [2024-10-01 01:28:17.354822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.362831] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.362877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.370853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.370905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.378855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.378890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.386855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.386879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.394924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.394973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.402945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.403005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.410949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.410987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.418951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.418976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.426971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.427004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 [2024-10-01 01:28:17.434992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.762 [2024-10-01 01:28:17.435038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (811364) - No such process 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 811364 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.762 delay0 00:10:37.762 01:28:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.762 01:28:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:37.762 [2024-10-01 01:28:17.513083] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.321 Initializing NVMe Controllers 00:10:44.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.321 Initialization complete. Launching workers. 00:10:44.321 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 123 00:10:44.321 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 410, failed to submit 33 00:10:44.321 success 243, unsuccessful 167, failed 0 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.321 rmmod nvme_tcp 00:10:44.321 rmmod nvme_fabrics 00:10:44.321 rmmod nvme_keyring 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 810015 ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 810015 ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810015' 00:10:44.321 killing process with pid 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 810015 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.321 01:28:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.222 00:10:46.222 real 0m27.941s 00:10:46.222 user 0m40.917s 00:10:46.222 sys 0m8.238s 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.222 ************************************ 00:10:46.222 END TEST nvmf_zcopy 00:10:46.222 ************************************ 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.222 01:28:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.479 ************************************ 00:10:46.479 START TEST nvmf_nmic 00:10:46.479 ************************************ 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:46.479 * Looking for test storage... 
00:10:46.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.479 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.480 --rc genhtml_branch_coverage=1 00:10:46.480 --rc genhtml_function_coverage=1 00:10:46.480 --rc genhtml_legend=1 00:10:46.480 --rc geninfo_all_blocks=1 00:10:46.480 --rc geninfo_unexecuted_blocks=1 00:10:46.480 00:10:46.480 ' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.480 --rc genhtml_branch_coverage=1 00:10:46.480 --rc genhtml_function_coverage=1 00:10:46.480 --rc genhtml_legend=1 00:10:46.480 --rc geninfo_all_blocks=1 00:10:46.480 --rc geninfo_unexecuted_blocks=1 00:10:46.480 00:10:46.480 ' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.480 --rc genhtml_branch_coverage=1 00:10:46.480 --rc genhtml_function_coverage=1 00:10:46.480 --rc genhtml_legend=1 00:10:46.480 --rc geninfo_all_blocks=1 00:10:46.480 --rc geninfo_unexecuted_blocks=1 00:10:46.480 00:10:46.480 ' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.480 --rc genhtml_branch_coverage=1 00:10:46.480 --rc genhtml_function_coverage=1 00:10:46.480 --rc genhtml_legend=1 00:10:46.480 --rc geninfo_all_blocks=1 00:10:46.480 --rc geninfo_unexecuted_blocks=1 00:10:46.480 00:10:46.480 ' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
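The "lt 1.15 2" call traced above comes from the cmp_versions helper in scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares them field by field; here the result selects the --rc lcov_*_coverage=1 spelling of the coverage options. A simplified stand-alone sketch of that comparison (an illustration only, not the actual helper) looks like:

    version_lt() {                       # true (exit 0) when $1 sorts before $2, field by field
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "1.15 sorts before 2"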
00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:46.480 
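MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above size the backing bdev the nmic test presumably exposes once the target is running. As a rough sketch of the equivalent manual RPC sequence (standard scripts/rpc.py commands, not copied from this run; the bdev name Malloc0 is an assumption, while the serial number, address and port come from the values set in this trace and the NQN matches the one used earlier in the run):

    ./scripts/rpc.py nvmf_create_transport -t tcp          # the harness adds extra options via NVMF_TRANSPORT_OPTS
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512  # 64 MB bdev with 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420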
01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.480 01:28:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:49.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:49.008 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:49.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.009 
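The device walk above maps each matching PCI function to its kernel net interface purely through sysfs (the pci_net_devs glob). The same lookup can be repeated by hand; the interface name below is the one echoed on this host, and operstate is the usual sysfs source for the "up" check made in the trace:

    pci=0000:0a:00.0                          # first E810 port found above (vendor 0x8086, device 0x159b, driver ice)
    ls /sys/bus/pci/devices/$pci/net/         # prints the bound interface name, cvl_0_0 on this host
    cat /sys/class/net/cvl_0_0/operstate      # link-state attribute behind the "[[ up == up ]]" test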
01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:49.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:49.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:10:49.009 00:10:49.009 --- 10.0.0.2 ping statistics --- 00:10:49.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.009 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:49.009 00:10:49.009 --- 10.0.0.1 ping statistics --- 00:10:49.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.009 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=814764 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 814764 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 814764 ']' 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.009 [2024-10-01 01:28:28.547177] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
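The nvmf/common.sh trace above (nvmf_tcp_init through nvmfappstart) reduces to the following sequence. This is a condensed sketch for readers reconstructing the test bed by hand, not part of the captured log; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the SPDK path are the ones this particular run reported and will differ on other hosts.

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0    # moved into the namespace, carries the target IP
INI_IF=cvl_0_1    # stays in the root namespace, carries the initiator IP
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagged SPDK_NVMF so teardown can strip it again.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace with the flags seen in the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

Splitting the NIC's two ports across a network-namespace boundary is what lets a single host drive NVMe/TCP traffic between a real initiator interface and a real target interface rather than over loopback.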
00:10:49.009 [2024-10-01 01:28:28.547256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.009 [2024-10-01 01:28:28.613100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.009 [2024-10-01 01:28:28.700646] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.009 [2024-10-01 01:28:28.700703] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.009 [2024-10-01 01:28:28.700717] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.009 [2024-10-01 01:28:28.700728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.009 [2024-10-01 01:28:28.700738] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.009 [2024-10-01 01:28:28.700880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.009 [2024-10-01 01:28:28.700945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.009 [2024-10-01 01:28:28.701039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.009 [2024-10-01 01:28:28.701044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.009 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.010 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.010 [2024-10-01 01:28:28.860839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.268 Malloc0 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.268 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.269 [2024-10-01 01:28:28.914637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:49.269 test case1: single bdev can't be used in multiple subsystems 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.269 [2024-10-01 01:28:28.938426] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:49.269 [2024-10-01 01:28:28.938456] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:49.269 [2024-10-01 01:28:28.938486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.269 request: 00:10:49.269 { 00:10:49.269 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:49.269 "namespace": { 00:10:49.269 "bdev_name": "Malloc0", 00:10:49.269 "no_auto_visible": false 
00:10:49.269 }, 00:10:49.269 "method": "nvmf_subsystem_add_ns", 00:10:49.269 "req_id": 1 00:10:49.269 } 00:10:49.269 Got JSON-RPC error response 00:10:49.269 response: 00:10:49.269 { 00:10:49.269 "code": -32602, 00:10:49.269 "message": "Invalid parameters" 00:10:49.269 } 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:49.269 Adding namespace failed - expected result. 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:49.269 test case2: host connect to nvmf target in multiple paths 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.269 [2024-10-01 01:28:28.946548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.269 01:28:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.832 01:28:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:50.764 01:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.764 01:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:50.764 01:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.764 01:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:50.764 01:28:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:52.661 01:28:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
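The nmic test body traced above amounts to the RPC and connect sequence below, shown here with scripts/rpc.py, which is roughly what the suite's rpc_cmd helper forwards to; every flag is copied from the trace. The if/else around the second add_ns is a paraphrase of the nmic_status bookkeeping: the JSON-RPC "Invalid parameters" error printed above is the expected outcome of test case1.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_args=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
           --hostid=5b23e107-7094-e311-b1cb-001e67a97d55)

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# test case1: a bdev already claimed by cnode1 must be rejected by cnode2.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "adding the same bdev to a second subsystem unexpectedly succeeded" >&2
    exit 1
fi

# test case2: expose cnode1 on a second port and connect over both paths.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect "${host_args[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${host_args[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421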
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:52.661 [global] 00:10:52.661 thread=1 00:10:52.661 invalidate=1 00:10:52.661 rw=write 00:10:52.661 time_based=1 00:10:52.661 runtime=1 00:10:52.661 ioengine=libaio 00:10:52.661 direct=1 00:10:52.661 bs=4096 00:10:52.661 iodepth=1 00:10:52.661 norandommap=0 00:10:52.661 numjobs=1 00:10:52.661 00:10:52.661 verify_dump=1 00:10:52.661 verify_backlog=512 00:10:52.661 verify_state_save=0 00:10:52.661 do_verify=1 00:10:52.661 verify=crc32c-intel 00:10:52.661 [job0] 00:10:52.661 filename=/dev/nvme0n1 00:10:52.661 Could not set queue depth (nvme0n1) 00:10:52.919 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.919 fio-3.35 00:10:52.919 Starting 1 thread 00:10:54.289 00:10:54.289 job0: (groupid=0, jobs=1): err= 0: pid=815288: Tue Oct 1 01:28:33 2024 00:10:54.289 read: IOPS=996, BW=3985KiB/s (4080kB/s)(4124KiB/1035msec) 00:10:54.290 slat (nsec): min=6866, max=44826, avg=11386.93, stdev=4743.10 00:10:54.290 clat (usec): min=268, max=41128, avg=614.17, stdev=3339.77 00:10:54.290 lat (usec): min=276, max=41148, avg=625.56, stdev=3340.62 00:10:54.290 clat percentiles (usec): 00:10:54.290 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:10:54.290 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:10:54.290 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 416], 00:10:54.290 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.290 | 99.99th=[41157] 00:10:54.290 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:10:54.290 slat (nsec): min=8487, max=72051, avg=22080.74, stdev=6139.66 00:10:54.290 clat (usec): min=164, max=2550, avg=223.41, stdev=64.11 00:10:54.290 lat (usec): min=173, max=2571, avg=245.49, stdev=65.25 00:10:54.290 clat percentiles (usec): 00:10:54.290 | 1.00th=[ 174], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:10:54.290 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:10:54.290 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 265], 00:10:54.290 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 2540], 00:10:54.290 | 99.99th=[ 2540] 00:10:54.290 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:10:54.290 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:54.290 lat (usec) : 250=54.03%, 500=45.23%, 750=0.43% 00:10:54.290 lat (msec) : 4=0.04%, 50=0.27% 00:10:54.290 cpu : usr=2.80%, sys=6.38%, ctx=2568, majf=0, minf=1 00:10:54.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.290 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.290 00:10:54.290 Run status group 0 (all jobs): 00:10:54.290 READ: bw=3985KiB/s (4080kB/s), 3985KiB/s-3985KiB/s (4080kB/s-4080kB/s), io=4124KiB (4223kB), run=1035-1035msec 00:10:54.290 WRITE: bw=5936KiB/s (6079kB/s), 5936KiB/s-5936KiB/s (6079kB/s-6079kB/s), io=6144KiB (6291kB), run=1035-1035msec 00:10:54.290 00:10:54.290 Disk stats (read/write): 00:10:54.290 nvme0n1: ios=1077/1536, merge=0/0, ticks=786/324, in_queue=1110, util=99.70% 00:10:54.290 01:28:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.290 rmmod nvme_tcp 00:10:54.290 rmmod nvme_fabrics 00:10:54.290 rmmod nvme_keyring 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 814764 ']' 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 814764 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 814764 ']' 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 814764 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814764 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814764' 00:10:54.290 killing process with pid 814764 00:10:54.290 01:28:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 814764 00:10:54.290 01:28:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 814764 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.549 01:28:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.451 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.451 00:10:56.451 real 0m10.207s 00:10:56.451 user 0m23.030s 00:10:56.451 sys 0m2.516s 00:10:56.451 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.451 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.451 ************************************ 00:10:56.451 END TEST nvmf_nmic 00:10:56.451 ************************************ 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.710 ************************************ 00:10:56.710 START TEST nvmf_fio_target 00:10:56.710 ************************************ 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.710 * Looking for test storage... 
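The cleanup that nvmftestfini performs at the end of nvmf_nmic (nvme disconnect through remove_spdk_ns above) is approximately the following. The PID and interface names are those of this run, and the ip netns delete line is an assumption about what remove_spdk_ns does with the single cvl_0_0_ns_spdk namespace created earlier.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths (2 controllers, as logged)
modprobe -v -r nvme-tcp                         # also unloads nvme-fabrics and nvme-keyring, per the rmmod lines
kill 814764                                     # the nvmf_tgt started for this test (killprocess in the trace)
# Remove only the firewall rules that setup tagged with SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                 # assumption: remove_spdk_ns tears the namespace back down
ip -4 addr flush cvl_0_1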
00:10:56.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.710 --rc genhtml_branch_coverage=1 00:10:56.710 --rc genhtml_function_coverage=1 00:10:56.710 --rc genhtml_legend=1 00:10:56.710 --rc geninfo_all_blocks=1 00:10:56.710 --rc geninfo_unexecuted_blocks=1 00:10:56.710 00:10:56.710 ' 00:10:56.710 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.710 --rc genhtml_branch_coverage=1 00:10:56.710 --rc genhtml_function_coverage=1 00:10:56.710 --rc genhtml_legend=1 00:10:56.710 --rc geninfo_all_blocks=1 00:10:56.710 --rc geninfo_unexecuted_blocks=1 00:10:56.710 00:10:56.710 ' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.711 --rc genhtml_branch_coverage=1 00:10:56.711 --rc genhtml_function_coverage=1 00:10:56.711 --rc genhtml_legend=1 00:10:56.711 --rc geninfo_all_blocks=1 00:10:56.711 --rc geninfo_unexecuted_blocks=1 00:10:56.711 00:10:56.711 ' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.711 --rc genhtml_branch_coverage=1 00:10:56.711 --rc genhtml_function_coverage=1 00:10:56.711 --rc genhtml_legend=1 00:10:56.711 --rc geninfo_all_blocks=1 00:10:56.711 --rc geninfo_unexecuted_blocks=1 00:10:56.711 00:10:56.711 ' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.711 01:28:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.711 01:28:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:58.612 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.613 01:28:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.613 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:58.871 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:58.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:58.871 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.872 01:28:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:58.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:58.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.872 01:28:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:58.872 00:10:58.872 --- 10.0.0.2 ping statistics --- 00:10:58.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.872 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:10:58.872 00:10:58.872 --- 10.0.0.1 ping statistics --- 00:10:58.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.872 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=817489 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 817489 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 817489 ']' 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.872 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.872 [2024-10-01 01:28:38.697587] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
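As in the nmic run, nvmfappstart here backgrounds nvmf_tgt inside the namespace and then waits (waitforlisten 817489) until the application answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, not the helper's actual implementation, is a poll against the RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=817489
for _ in $(seq 1 100); do
    # Give up immediately if the target died during startup.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # Done once the RPC server responds on the default socket.
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done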
00:10:58.872 [2024-10-01 01:28:38.697685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.130 [2024-10-01 01:28:38.763995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.130 [2024-10-01 01:28:38.848479] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.130 [2024-10-01 01:28:38.848546] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.130 [2024-10-01 01:28:38.848560] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.130 [2024-10-01 01:28:38.848571] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.130 [2024-10-01 01:28:38.848595] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.130 [2024-10-01 01:28:38.848685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.130 [2024-10-01 01:28:38.848751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.130 [2024-10-01 01:28:38.848782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.130 [2024-10-01 01:28:38.848785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.130 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.130 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.130 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:59.130 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.130 01:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.386 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.386 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:59.643 [2024-10-01 01:28:39.247676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.643 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.899 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:59.899 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.156 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:00.156 01:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.413 01:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:00.413 01:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.005 01:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:01.005 01:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:01.005 01:28:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.286 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:01.286 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.852 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:01.852 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.852 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:01.852 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:02.416 01:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.416 01:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.416 01:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.673 01:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:02.673 01:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.930 01:28:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.187 [2024-10-01 01:28:43.027860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.444 01:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:03.701 01:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:03.958 01:28:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.521 01:28:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:04.521 01:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.521 01:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.521 01:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:04.521 01:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:04.521 01:28:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:07.046 01:28:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:07.046 [global] 00:11:07.046 thread=1 00:11:07.046 invalidate=1 00:11:07.046 rw=write 00:11:07.046 time_based=1 00:11:07.046 runtime=1 00:11:07.046 ioengine=libaio 00:11:07.046 direct=1 00:11:07.046 bs=4096 00:11:07.046 iodepth=1 00:11:07.046 norandommap=0 00:11:07.046 numjobs=1 00:11:07.046 00:11:07.046 verify_dump=1 00:11:07.046 verify_backlog=512 00:11:07.046 verify_state_save=0 00:11:07.046 do_verify=1 00:11:07.046 verify=crc32c-intel 00:11:07.046 [job0] 00:11:07.046 filename=/dev/nvme0n1 00:11:07.046 [job1] 00:11:07.046 filename=/dev/nvme0n2 00:11:07.046 [job2] 00:11:07.046 filename=/dev/nvme0n3 00:11:07.046 [job3] 00:11:07.046 filename=/dev/nvme0n4 00:11:07.046 Could not set queue depth (nvme0n1) 00:11:07.046 Could not set queue depth (nvme0n2) 00:11:07.046 Could not set queue depth (nvme0n3) 00:11:07.046 Could not set queue depth (nvme0n4) 00:11:07.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.046 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.046 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.046 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.046 fio-3.35 00:11:07.046 Starting 4 threads 00:11:07.978 00:11:07.978 job0: (groupid=0, jobs=1): err= 0: pid=818572: Tue Oct 1 01:28:47 2024 00:11:07.978 read: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec) 00:11:07.978 slat (nsec): min=5020, max=57534, avg=15447.49, stdev=7589.81 00:11:07.978 clat (usec): min=222, max=1909, avg=290.81, stdev=51.51 00:11:07.978 lat (usec): min=230, max=1927, avg=306.26, stdev=52.61 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 
00:11:07.978 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:11:07.978 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 367], 00:11:07.978 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 461], 99.95th=[ 1909], 00:11:07.978 | 99.99th=[ 1909] 00:11:07.978 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:07.978 slat (nsec): min=6382, max=58730, avg=15134.03, stdev=7494.88 00:11:07.978 clat (usec): min=158, max=2219, avg=208.12, stdev=51.47 00:11:07.978 lat (usec): min=168, max=2236, avg=223.25, stdev=53.29 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:11:07.978 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:11:07.978 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 245], 00:11:07.978 | 99.00th=[ 289], 99.50th=[ 359], 99.90th=[ 429], 99.95th=[ 453], 00:11:07.978 | 99.99th=[ 2212] 00:11:07.978 bw ( KiB/s): min= 8192, max= 8192, per=33.86%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.978 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.978 lat (usec) : 250=54.79%, 500=45.16% 00:11:07.978 lat (msec) : 2=0.03%, 4=0.03% 00:11:07.978 cpu : usr=3.70%, sys=6.80%, ctx=3767, majf=0, minf=1 00:11:07.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 issued rwts: total=1719,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.978 job1: (groupid=0, jobs=1): err= 0: pid=818574: Tue Oct 1 01:28:47 2024 00:11:07.978 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.978 slat (nsec): min=4774, max=70834, avg=17695.80, stdev=10265.99 00:11:07.978 clat (usec): min=261, max=557, avg=343.52, stdev=47.72 00:11:07.978 lat (usec): min=268, max=577, avg=361.21, stdev=52.91 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 302], 00:11:07.978 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 343], 00:11:07.978 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 433], 00:11:07.978 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 553], 99.95th=[ 562], 00:11:07.978 | 99.99th=[ 562] 00:11:07.978 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec); 0 zone resets 00:11:07.978 slat (nsec): min=5771, max=59679, avg=13432.81, stdev=6107.41 00:11:07.978 clat (usec): min=161, max=383, avg=206.00, stdev=24.43 00:11:07.978 lat (usec): min=168, max=409, avg=219.44, stdev=26.28 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:07.978 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:11:07.978 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 249], 00:11:07.978 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 379], 99.95th=[ 383], 00:11:07.978 | 99.99th=[ 383] 00:11:07.978 bw ( KiB/s): min= 8192, max= 8192, per=33.86%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.978 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.978 lat (usec) : 250=53.55%, 500=46.37%, 750=0.09% 00:11:07.978 cpu : usr=2.90%, sys=5.60%, ctx=3494, majf=0, minf=1 00:11:07.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.978 job2: (groupid=0, jobs=1): err= 0: pid=818575: Tue Oct 1 01:28:47 2024 00:11:07.978 read: IOPS=397, BW=1590KiB/s (1629kB/s)(1592KiB/1001msec) 00:11:07.978 slat (nsec): min=5086, max=37724, avg=12694.31, stdev=6194.16 00:11:07.978 clat (usec): min=253, max=41964, avg=2001.69, stdev=8014.22 00:11:07.978 lat (usec): min=265, max=41999, avg=2014.39, stdev=8016.99 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:11:07.978 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 388], 00:11:07.978 | 70.00th=[ 408], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 676], 00:11:07.978 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:07.978 | 99.99th=[42206] 00:11:07.978 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:07.978 slat (nsec): min=7914, max=66912, avg=25977.41, stdev=11713.22 00:11:07.978 clat (usec): min=187, max=559, avg=352.59, stdev=90.44 00:11:07.978 lat (usec): min=228, max=581, avg=378.57, stdev=85.61 00:11:07.978 clat percentiles (usec): 00:11:07.978 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 231], 20.00th=[ 258], 00:11:07.978 | 30.00th=[ 281], 40.00th=[ 314], 50.00th=[ 363], 60.00th=[ 400], 00:11:07.978 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 486], 00:11:07.978 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 562], 99.95th=[ 562], 00:11:07.978 | 99.99th=[ 562] 00:11:07.978 bw ( KiB/s): min= 4096, max= 4096, per=16.93%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.978 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.978 lat (usec) : 250=9.34%, 500=86.04%, 750=2.64%, 1000=0.22% 00:11:07.978 lat (msec) : 50=1.76% 00:11:07.978 cpu : usr=0.80%, sys=2.00%, ctx=911, majf=0, minf=1 00:11:07.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.978 issued rwts: total=398,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.978 job3: (groupid=0, jobs=1): err= 0: pid=818576: Tue Oct 1 01:28:47 2024 00:11:07.978 read: IOPS=1030, BW=4124KiB/s (4223kB/s)(4128KiB/1001msec) 00:11:07.978 slat (nsec): min=6603, max=52091, avg=16029.99, stdev=6666.62 00:11:07.978 clat (usec): min=276, max=653, avg=396.52, stdev=79.82 00:11:07.978 lat (usec): min=286, max=661, avg=412.55, stdev=80.22 00:11:07.978 clat percentiles (usec): 00:11:07.979 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:11:07.979 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 392], 00:11:07.979 | 70.00th=[ 449], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 545], 00:11:07.979 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 652], 00:11:07.979 | 99.99th=[ 652] 00:11:07.979 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:07.979 slat (nsec): min=8348, max=65019, avg=23526.08, stdev=10911.35 00:11:07.979 clat (usec): min=240, max=765, avg=341.50, stdev=54.38 00:11:07.979 lat (usec): min=248, max=791, avg=365.03, stdev=55.50 00:11:07.979 clat percentiles (usec): 
00:11:07.979 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:11:07.979 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:11:07.979 | 70.00th=[ 359], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:11:07.979 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 515], 99.95th=[ 766], 00:11:07.979 | 99.99th=[ 766] 00:11:07.979 bw ( KiB/s): min= 5616, max= 5616, per=23.21%, avg=5616.00, stdev= 0.00, samples=1 00:11:07.979 iops : min= 1404, max= 1404, avg=1404.00, stdev= 0.00, samples=1 00:11:07.979 lat (usec) : 250=0.19%, 500=94.12%, 750=5.65%, 1000=0.04% 00:11:07.979 cpu : usr=3.60%, sys=6.90%, ctx=2569, majf=0, minf=1 00:11:07.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.979 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.979 00:11:07.979 Run status group 0 (all jobs): 00:11:07.979 READ: bw=18.3MiB/s (19.2MB/s), 1590KiB/s-6869KiB/s (1629kB/s-7034kB/s), io=18.3MiB (19.2MB), run=1001-1001msec 00:11:07.979 WRITE: bw=23.6MiB/s (24.8MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=23.6MiB (24.8MB), run=1001-1001msec 00:11:07.979 00:11:07.979 Disk stats (read/write): 00:11:07.979 nvme0n1: ios=1586/1607, merge=0/0, ticks=542/325, in_queue=867, util=91.08% 00:11:07.979 nvme0n2: ios=1381/1536, merge=0/0, ticks=469/308, in_queue=777, util=87.28% 00:11:07.979 nvme0n3: ios=39/512, merge=0/0, ticks=1580/178, in_queue=1758, util=98.22% 00:11:07.979 nvme0n4: ios=1024/1115, merge=0/0, ticks=382/355, in_queue=737, util=89.67% 00:11:07.979 01:28:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:07.979 [global] 00:11:07.979 thread=1 00:11:07.979 invalidate=1 00:11:07.979 rw=randwrite 00:11:07.979 time_based=1 00:11:07.979 runtime=1 00:11:07.979 ioengine=libaio 00:11:07.979 direct=1 00:11:07.979 bs=4096 00:11:07.979 iodepth=1 00:11:07.979 norandommap=0 00:11:07.979 numjobs=1 00:11:07.979 00:11:07.979 verify_dump=1 00:11:07.979 verify_backlog=512 00:11:07.979 verify_state_save=0 00:11:07.979 do_verify=1 00:11:07.979 verify=crc32c-intel 00:11:07.979 [job0] 00:11:07.979 filename=/dev/nvme0n1 00:11:07.979 [job1] 00:11:07.979 filename=/dev/nvme0n2 00:11:07.979 [job2] 00:11:07.979 filename=/dev/nvme0n3 00:11:07.979 [job3] 00:11:07.979 filename=/dev/nvme0n4 00:11:07.979 Could not set queue depth (nvme0n1) 00:11:07.979 Could not set queue depth (nvme0n2) 00:11:07.979 Could not set queue depth (nvme0n3) 00:11:07.979 Could not set queue depth (nvme0n4) 00:11:08.236 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.236 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.236 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.236 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.236 fio-3.35 00:11:08.236 Starting 4 threads 00:11:09.607 00:11:09.607 job0: (groupid=0, jobs=1): err= 0: pid=818802: Tue Oct 1 01:28:49 2024 00:11:09.607 read: IOPS=1151, BW=4607KiB/s (4718kB/s)(4612KiB/1001msec) 
00:11:09.607 slat (nsec): min=6953, max=36566, avg=9947.64, stdev=3973.33 00:11:09.607 clat (usec): min=231, max=41270, avg=558.05, stdev=3395.17 00:11:09.607 lat (usec): min=240, max=41287, avg=568.00, stdev=3396.74 00:11:09.607 clat percentiles (usec): 00:11:09.607 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:11:09.607 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:11:09.607 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:11:09.607 | 99.00th=[ 429], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:09.607 | 99.99th=[41157] 00:11:09.607 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:09.607 slat (nsec): min=8879, max=58398, avg=14188.23, stdev=6561.63 00:11:09.607 clat (usec): min=160, max=613, avg=204.49, stdev=25.41 00:11:09.607 lat (usec): min=170, max=622, avg=218.68, stdev=28.90 00:11:09.607 clat percentiles (usec): 00:11:09.607 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:11:09.607 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:11:09.607 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 239], 00:11:09.607 | 99.00th=[ 258], 99.50th=[ 297], 99.90th=[ 453], 99.95th=[ 611], 00:11:09.607 | 99.99th=[ 611] 00:11:09.607 bw ( KiB/s): min= 4096, max= 4096, per=14.70%, avg=4096.00, stdev= 0.00, samples=1 00:11:09.607 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:09.607 lat (usec) : 250=63.07%, 500=36.48%, 750=0.11% 00:11:09.607 lat (msec) : 20=0.04%, 50=0.30% 00:11:09.607 cpu : usr=1.80%, sys=5.10%, ctx=2690, majf=0, minf=1 00:11:09.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.607 issued rwts: total=1153,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.608 job1: (groupid=0, jobs=1): err= 0: pid=818803: Tue Oct 1 01:28:49 2024 00:11:09.608 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:11:09.608 slat (nsec): min=6961, max=50290, avg=13390.89, stdev=5371.37 00:11:09.608 clat (usec): min=233, max=1492, avg=275.42, stdev=41.74 00:11:09.608 lat (usec): min=241, max=1500, avg=288.81, stdev=43.48 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:11:09.608 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:11:09.608 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:11:09.608 | 99.00th=[ 338], 99.50th=[ 371], 99.90th=[ 1287], 99.95th=[ 1500], 00:11:09.608 | 99.99th=[ 1500] 00:11:09.608 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:09.608 slat (usec): min=9, max=118, avg=14.30, stdev= 6.81 00:11:09.608 clat (usec): min=160, max=1126, avg=196.52, stdev=33.93 00:11:09.608 lat (usec): min=169, max=1138, avg=210.81, stdev=36.26 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:11:09.608 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:11:09.608 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 225], 00:11:09.608 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 709], 99.95th=[ 824], 00:11:09.608 | 99.99th=[ 1123] 00:11:09.608 bw ( KiB/s): min= 8192, max= 8192, per=29.41%, avg=8192.00, stdev= 0.00, 
samples=1 00:11:09.608 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.608 lat (usec) : 250=56.54%, 500=43.31%, 750=0.05%, 1000=0.03% 00:11:09.608 lat (msec) : 2=0.08% 00:11:09.608 cpu : usr=4.10%, sys=7.50%, ctx=3962, majf=0, minf=1 00:11:09.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.608 job2: (groupid=0, jobs=1): err= 0: pid=818804: Tue Oct 1 01:28:49 2024 00:11:09.608 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:09.608 slat (nsec): min=7276, max=50265, avg=12712.99, stdev=5131.93 00:11:09.608 clat (usec): min=302, max=1297, avg=350.12, stdev=46.52 00:11:09.608 lat (usec): min=312, max=1307, avg=362.83, stdev=47.50 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:11:09.608 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:11:09.608 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 396], 00:11:09.608 | 99.00th=[ 453], 99.50th=[ 545], 99.90th=[ 1123], 99.95th=[ 1303], 00:11:09.608 | 99.99th=[ 1303] 00:11:09.608 write: IOPS=1688, BW=6753KiB/s (6915kB/s)(6760KiB/1001msec); 0 zone resets 00:11:09.608 slat (nsec): min=9219, max=63369, avg=16961.62, stdev=7725.12 00:11:09.608 clat (usec): min=181, max=2026, avg=237.03, stdev=52.28 00:11:09.608 lat (usec): min=199, max=2039, avg=254.00, stdev=53.31 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:11:09.608 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:11:09.608 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:11:09.608 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 807], 99.95th=[ 2024], 00:11:09.608 | 99.99th=[ 2024] 00:11:09.608 bw ( KiB/s): min= 8192, max= 8192, per=29.41%, avg=8192.00, stdev= 0.00, samples=1 00:11:09.608 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.608 lat (usec) : 250=42.16%, 500=57.44%, 750=0.25%, 1000=0.03% 00:11:09.608 lat (msec) : 2=0.09%, 4=0.03% 00:11:09.608 cpu : usr=4.20%, sys=5.90%, ctx=3227, majf=0, minf=1 00:11:09.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 issued rwts: total=1536,1690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.608 job3: (groupid=0, jobs=1): err= 0: pid=818805: Tue Oct 1 01:28:49 2024 00:11:09.608 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:09.608 slat (nsec): min=7356, max=37017, avg=12866.61, stdev=5427.67 00:11:09.608 clat (usec): min=278, max=519, avg=346.43, stdev=20.14 00:11:09.608 lat (usec): min=286, max=540, avg=359.30, stdev=22.68 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:11:09.608 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:11:09.608 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379], 00:11:09.608 | 99.00th=[ 404], 99.50th=[ 416], 
99.90th=[ 478], 99.95th=[ 519], 00:11:09.608 | 99.99th=[ 519] 00:11:09.608 write: IOPS=1695, BW=6781KiB/s (6944kB/s)(6788KiB/1001msec); 0 zone resets 00:11:09.608 slat (nsec): min=8464, max=53181, avg=17009.06, stdev=7377.94 00:11:09.608 clat (usec): min=181, max=483, avg=238.83, stdev=31.56 00:11:09.608 lat (usec): min=191, max=524, avg=255.84, stdev=35.13 00:11:09.608 clat percentiles (usec): 00:11:09.608 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:11:09.608 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:11:09.608 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 277], 00:11:09.608 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 482], 99.95th=[ 486], 00:11:09.608 | 99.99th=[ 486] 00:11:09.608 bw ( KiB/s): min= 8192, max= 8192, per=29.41%, avg=8192.00, stdev= 0.00, samples=1 00:11:09.608 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:09.608 lat (usec) : 250=41.63%, 500=58.34%, 750=0.03% 00:11:09.608 cpu : usr=4.10%, sys=6.00%, ctx=3235, majf=0, minf=1 00:11:09.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:09.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.608 issued rwts: total=1536,1697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:09.608 00:11:09.608 Run status group 0 (all jobs): 00:11:09.608 READ: bw=23.9MiB/s (25.1MB/s), 4607KiB/s-7640KiB/s (4718kB/s-7824kB/s), io=24.0MiB (25.1MB), run=1001-1001msec 00:11:09.608 WRITE: bw=27.2MiB/s (28.5MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=27.2MiB (28.6MB), run=1001-1001msec 00:11:09.608 00:11:09.608 Disk stats (read/write): 00:11:09.608 nvme0n1: ios=1077/1053, merge=0/0, ticks=913/195, in_queue=1108, util=97.49% 00:11:09.608 nvme0n2: ios=1567/1846, merge=0/0, ticks=1439/344, in_queue=1783, util=100.00% 00:11:09.608 nvme0n3: ios=1288/1536, merge=0/0, ticks=1153/350, in_queue=1503, util=97.58% 00:11:09.608 nvme0n4: ios=1259/1536, merge=0/0, ticks=1326/345, in_queue=1671, util=97.46% 00:11:09.608 01:28:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:09.608 [global] 00:11:09.608 thread=1 00:11:09.608 invalidate=1 00:11:09.608 rw=write 00:11:09.608 time_based=1 00:11:09.608 runtime=1 00:11:09.608 ioengine=libaio 00:11:09.608 direct=1 00:11:09.608 bs=4096 00:11:09.608 iodepth=128 00:11:09.608 norandommap=0 00:11:09.608 numjobs=1 00:11:09.608 00:11:09.608 verify_dump=1 00:11:09.608 verify_backlog=512 00:11:09.608 verify_state_save=0 00:11:09.608 do_verify=1 00:11:09.608 verify=crc32c-intel 00:11:09.608 [job0] 00:11:09.608 filename=/dev/nvme0n1 00:11:09.608 [job1] 00:11:09.608 filename=/dev/nvme0n2 00:11:09.608 [job2] 00:11:09.608 filename=/dev/nvme0n3 00:11:09.608 [job3] 00:11:09.608 filename=/dev/nvme0n4 00:11:09.608 Could not set queue depth (nvme0n1) 00:11:09.608 Could not set queue depth (nvme0n2) 00:11:09.608 Could not set queue depth (nvme0n3) 00:11:09.608 Could not set queue depth (nvme0n4) 00:11:09.609 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.609 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.609 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.609 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.609 fio-3.35 00:11:09.609 Starting 4 threads 00:11:10.980 00:11:10.980 job0: (groupid=0, jobs=1): err= 0: pid=819035: Tue Oct 1 01:28:50 2024 00:11:10.980 read: IOPS=5359, BW=20.9MiB/s (22.0MB/s)(21.9MiB/1046msec) 00:11:10.980 slat (usec): min=2, max=7268, avg=86.92, stdev=485.18 00:11:10.980 clat (usec): min=2253, max=58074, avg=12382.63, stdev=6509.54 00:11:10.980 lat (usec): min=4521, max=58392, avg=12469.54, stdev=6514.46 00:11:10.980 clat percentiles (usec): 00:11:10.980 | 1.00th=[ 6456], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:11:10.980 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11600], 60.00th=[11994], 00:11:10.980 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13829], 95.00th=[16188], 00:11:10.980 | 99.00th=[54789], 99.50th=[55313], 99.90th=[57934], 99.95th=[57934], 00:11:10.981 | 99.99th=[57934] 00:11:10.981 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(22.0MiB/1046msec); 0 zone resets 00:11:10.981 slat (usec): min=3, max=9594, avg=83.29, stdev=441.54 00:11:10.981 clat (usec): min=5164, max=22752, avg=11069.31, stdev=1966.85 00:11:10.981 lat (usec): min=5170, max=23545, avg=11152.60, stdev=1993.80 00:11:10.981 clat percentiles (usec): 00:11:10.981 | 1.00th=[ 6128], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9503], 00:11:10.981 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:11:10.981 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13566], 95.00th=[14222], 00:11:10.981 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18482], 99.95th=[19268], 00:11:10.981 | 99.99th=[22676] 00:11:10.981 bw ( KiB/s): min=20480, max=24576, per=37.82%, avg=22528.00, stdev=2896.31, samples=2 00:11:10.981 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:11:10.981 lat (msec) : 4=0.01%, 10=27.67%, 20=70.96%, 50=0.56%, 100=0.79% 00:11:10.981 cpu : usr=5.84%, sys=8.71%, ctx=593, majf=0, minf=2 00:11:10.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:10.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.981 issued rwts: total=5606,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.981 job1: (groupid=0, jobs=1): err= 0: pid=819041: Tue Oct 1 01:28:50 2024 00:11:10.981 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:10.981 slat (usec): min=2, max=44680, avg=152.82, stdev=1260.24 00:11:10.981 clat (usec): min=5014, max=75768, avg=20248.76, stdev=14885.10 00:11:10.981 lat (usec): min=5019, max=75776, avg=20401.58, stdev=14949.40 00:11:10.981 clat percentiles (usec): 00:11:10.981 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[10945], 00:11:10.981 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13042], 60.00th=[15795], 00:11:10.981 | 70.00th=[22152], 80.00th=[28181], 90.00th=[37487], 95.00th=[56886], 00:11:10.981 | 99.00th=[74974], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:11:10.981 | 99.99th=[76022] 00:11:10.981 write: IOPS=3903, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1005msec); 0 zone resets 00:11:10.981 slat (usec): min=3, max=6277, avg=106.26, stdev=512.97 00:11:10.981 clat (usec): min=786, max=34476, avg=13907.52, stdev=5823.70 00:11:10.981 lat (usec): min=977, max=34495, avg=14013.77, stdev=5856.47 00:11:10.981 clat percentiles (usec): 
00:11:10.981 | 1.00th=[ 4752], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[10814], 00:11:10.981 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12518], 00:11:10.981 | 70.00th=[15401], 80.00th=[17171], 90.00th=[20841], 95.00th=[28967], 00:11:10.981 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:11:10.981 | 99.99th=[34341] 00:11:10.981 bw ( KiB/s): min=13976, max=16384, per=25.48%, avg=15180.00, stdev=1702.71, samples=2 00:11:10.981 iops : min= 3494, max= 4096, avg=3795.00, stdev=425.68, samples=2 00:11:10.981 lat (usec) : 1000=0.08% 00:11:10.981 lat (msec) : 4=0.23%, 10=10.06%, 20=68.87%, 50=17.76%, 100=3.01% 00:11:10.981 cpu : usr=3.49%, sys=5.18%, ctx=408, majf=0, minf=1 00:11:10.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:10.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.981 issued rwts: total=3584,3923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.981 job2: (groupid=0, jobs=1): err= 0: pid=819042: Tue Oct 1 01:28:50 2024 00:11:10.981 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:11:10.981 slat (usec): min=2, max=13269, avg=132.27, stdev=819.49 00:11:10.981 clat (usec): min=5155, max=43134, avg=17003.84, stdev=5452.11 00:11:10.981 lat (usec): min=5163, max=43141, avg=17136.10, stdev=5515.04 00:11:10.981 clat percentiles (usec): 00:11:10.981 | 1.00th=[ 8029], 5.00th=[11863], 10.00th=[12256], 20.00th=[13698], 00:11:10.981 | 30.00th=[13829], 40.00th=[14746], 50.00th=[15926], 60.00th=[16188], 00:11:10.981 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23200], 95.00th=[27919], 00:11:10.981 | 99.00th=[38536], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:11:10.981 | 99.99th=[43254] 00:11:10.981 write: IOPS=3284, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1004msec); 0 zone resets 00:11:10.981 slat (usec): min=4, max=12986, avg=164.35, stdev=737.49 00:11:10.981 clat (usec): min=3564, max=43137, avg=22781.33, stdev=9135.74 00:11:10.981 lat (usec): min=3738, max=43147, avg=22945.68, stdev=9202.30 00:11:10.981 clat percentiles (usec): 00:11:10.981 | 1.00th=[ 5407], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[12649], 00:11:10.981 | 30.00th=[15008], 40.00th=[19792], 50.00th=[25035], 60.00th=[26346], 00:11:10.981 | 70.00th=[30540], 80.00th=[31589], 90.00th=[34341], 95.00th=[35914], 00:11:10.981 | 99.00th=[37487], 99.50th=[40109], 99.90th=[40633], 99.95th=[43254], 00:11:10.981 | 99.99th=[43254] 00:11:10.981 bw ( KiB/s): min=12328, max=13040, per=21.29%, avg=12684.00, stdev=503.46, samples=2 00:11:10.981 iops : min= 3082, max= 3260, avg=3171.00, stdev=125.87, samples=2 00:11:10.981 lat (msec) : 4=0.11%, 10=3.88%, 20=56.25%, 50=39.76% 00:11:10.981 cpu : usr=3.49%, sys=6.08%, ctx=371, majf=0, minf=1 00:11:10.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:10.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.981 issued rwts: total=3072,3298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.981 job3: (groupid=0, jobs=1): err= 0: pid=819043: Tue Oct 1 01:28:50 2024 00:11:10.981 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:10.981 slat (usec): min=3, max=23766, avg=152.83, stdev=920.41 
00:11:10.981 clat (msec): min=9, max=106, avg=21.10, stdev=16.40 00:11:10.981 lat (msec): min=9, max=106, avg=21.25, stdev=16.48 00:11:10.981 clat percentiles (msec): 00:11:10.981 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 13], 00:11:10.981 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 19], 00:11:10.981 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 40], 95.00th=[ 48], 00:11:10.981 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 107], 99.95th=[ 107], 00:11:10.981 | 99.99th=[ 107] 00:11:10.981 write: IOPS=2710, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec); 0 zone resets 00:11:10.981 slat (usec): min=4, max=28395, avg=213.99, stdev=1280.48 00:11:10.981 clat (msec): min=3, max=101, avg=26.44, stdev=18.18 00:11:10.981 lat (msec): min=4, max=101, avg=26.65, stdev=18.33 00:11:10.981 clat percentiles (msec): 00:11:10.981 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:11:10.981 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 20], 60.00th=[ 27], 00:11:10.981 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 59], 95.00th=[ 67], 00:11:10.981 | 99.00th=[ 81], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:11:10.981 | 99.99th=[ 103] 00:11:10.981 bw ( KiB/s): min= 8192, max=12584, per=17.44%, avg=10388.00, stdev=3105.61, samples=2 00:11:10.981 iops : min= 2048, max= 3146, avg=2597.00, stdev=776.40, samples=2 00:11:10.981 lat (msec) : 4=0.02%, 10=2.08%, 20=56.32%, 50=33.35%, 100=7.48% 00:11:10.981 lat (msec) : 250=0.76% 00:11:10.981 cpu : usr=2.89%, sys=5.98%, ctx=339, majf=0, minf=1 00:11:10.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:10.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.981 issued rwts: total=2560,2724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.981 00:11:10.981 Run status group 0 (all jobs): 00:11:10.981 READ: bw=55.4MiB/s (58.0MB/s), 9.95MiB/s-20.9MiB/s (10.4MB/s-22.0MB/s), io=57.9MiB (60.7MB), run=1004-1046msec 00:11:10.981 WRITE: bw=58.2MiB/s (61.0MB/s), 10.6MiB/s-21.0MiB/s (11.1MB/s-22.1MB/s), io=60.8MiB (63.8MB), run=1004-1046msec 00:11:10.981 00:11:10.981 Disk stats (read/write): 00:11:10.981 nvme0n1: ios=4658/5119, merge=0/0, ticks=18611/19554, in_queue=38165, util=86.27% 00:11:10.981 nvme0n2: ios=3144/3584, merge=0/0, ticks=19580/17937, in_queue=37517, util=97.26% 00:11:10.981 nvme0n3: ios=2598/2903, merge=0/0, ticks=38469/57984, in_queue=96453, util=100.00% 00:11:10.981 nvme0n4: ios=1746/2048, merge=0/0, ticks=13231/21449, in_queue=34680, util=89.70% 00:11:10.981 01:28:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:10.981 [global] 00:11:10.981 thread=1 00:11:10.981 invalidate=1 00:11:10.981 rw=randwrite 00:11:10.981 time_based=1 00:11:10.981 runtime=1 00:11:10.981 ioengine=libaio 00:11:10.981 direct=1 00:11:10.981 bs=4096 00:11:10.981 iodepth=128 00:11:10.981 norandommap=0 00:11:10.981 numjobs=1 00:11:10.981 00:11:10.981 verify_dump=1 00:11:10.981 verify_backlog=512 00:11:10.981 verify_state_save=0 00:11:10.981 do_verify=1 00:11:10.981 verify=crc32c-intel 00:11:10.981 [job0] 00:11:10.981 filename=/dev/nvme0n1 00:11:10.981 [job1] 00:11:10.981 filename=/dev/nvme0n2 00:11:10.981 [job2] 00:11:10.981 filename=/dev/nvme0n3 00:11:10.981 [job3] 00:11:10.981 filename=/dev/nvme0n4 00:11:10.981 Could 
not set queue depth (nvme0n1) 00:11:10.981 Could not set queue depth (nvme0n2) 00:11:10.981 Could not set queue depth (nvme0n3) 00:11:10.981 Could not set queue depth (nvme0n4) 00:11:11.238 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.238 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.238 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.238 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:11.238 fio-3.35 00:11:11.238 Starting 4 threads 00:11:12.617 00:11:12.617 job0: (groupid=0, jobs=1): err= 0: pid=819341: Tue Oct 1 01:28:52 2024 00:11:12.617 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:11:12.617 slat (usec): min=3, max=12298, avg=97.21, stdev=684.01 00:11:12.617 clat (usec): min=3722, max=29355, avg=12469.38, stdev=3066.09 00:11:12.617 lat (usec): min=3727, max=29362, avg=12566.59, stdev=3115.85 00:11:12.617 clat percentiles (usec): 00:11:12.617 | 1.00th=[ 6521], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10290], 00:11:12.617 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:11:12.617 | 70.00th=[13304], 80.00th=[14615], 90.00th=[17171], 95.00th=[18744], 00:11:12.617 | 99.00th=[21103], 99.50th=[22414], 99.90th=[24511], 99.95th=[24511], 00:11:12.617 | 99.99th=[29230] 00:11:12.617 write: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(21.4MiB/1008msec); 0 zone resets 00:11:12.617 slat (usec): min=4, max=10967, avg=82.07, stdev=482.97 00:11:12.617 clat (usec): min=1289, max=36741, avg=11643.78, stdev=4242.04 00:11:12.617 lat (usec): min=1314, max=36760, avg=11725.85, stdev=4270.69 00:11:12.617 clat percentiles (usec): 00:11:12.617 | 1.00th=[ 3851], 5.00th=[ 6587], 10.00th=[ 7701], 20.00th=[10028], 00:11:12.617 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:12.617 | 70.00th=[11469], 80.00th=[11863], 90.00th=[15401], 95.00th=[18482], 00:11:12.617 | 99.00th=[32113], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:11:12.617 | 99.99th=[36963] 00:11:12.617 bw ( KiB/s): min=18896, max=23920, per=32.32%, avg=21408.00, stdev=3552.50, samples=2 00:11:12.617 iops : min= 4724, max= 5980, avg=5352.00, stdev=888.13, samples=2 00:11:12.617 lat (msec) : 2=0.01%, 4=0.63%, 10=14.11%, 20=82.11%, 50=3.13% 00:11:12.617 cpu : usr=6.85%, sys=10.92%, ctx=549, majf=0, minf=1 00:11:12.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:12.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.617 issued rwts: total=5120,5480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.617 job1: (groupid=0, jobs=1): err= 0: pid=819360: Tue Oct 1 01:28:52 2024 00:11:12.617 read: IOPS=5117, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1011msec) 00:11:12.617 slat (usec): min=3, max=14768, avg=103.08, stdev=729.01 00:11:12.617 clat (usec): min=3897, max=27675, avg=12339.90, stdev=3072.83 00:11:12.617 lat (usec): min=3904, max=27691, avg=12442.98, stdev=3117.76 00:11:12.617 clat percentiles (usec): 00:11:12.617 | 1.00th=[ 4490], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10683], 00:11:12.618 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:11:12.618 | 70.00th=[12518], 
80.00th=[13960], 90.00th=[17171], 95.00th=[19006], 00:11:12.618 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:11:12.618 | 99.99th=[27657] 00:11:12.618 write: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec); 0 zone resets 00:11:12.618 slat (usec): min=4, max=15477, avg=76.56, stdev=398.89 00:11:12.618 clat (usec): min=2319, max=32496, avg=11397.37, stdev=3070.46 00:11:12.618 lat (usec): min=2327, max=32521, avg=11473.92, stdev=3101.51 00:11:12.618 clat percentiles (usec): 00:11:12.618 | 1.00th=[ 3064], 5.00th=[ 5407], 10.00th=[ 7570], 20.00th=[10421], 00:11:12.618 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:11:12.618 | 70.00th=[11994], 80.00th=[12125], 90.00th=[13435], 95.00th=[17695], 00:11:12.618 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21627], 99.95th=[23987], 00:11:12.618 | 99.99th=[32375] 00:11:12.618 bw ( KiB/s): min=20912, max=23560, per=33.57%, avg=22236.00, stdev=1872.42, samples=2 00:11:12.618 iops : min= 5228, max= 5890, avg=5559.00, stdev=468.10, samples=2 00:11:12.618 lat (msec) : 4=1.34%, 10=12.29%, 20=83.85%, 50=2.52% 00:11:12.618 cpu : usr=3.47%, sys=7.62%, ctx=721, majf=0, minf=1 00:11:12.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:12.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.618 issued rwts: total=5174,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.618 job2: (groupid=0, jobs=1): err= 0: pid=819389: Tue Oct 1 01:28:52 2024 00:11:12.618 read: IOPS=3257, BW=12.7MiB/s (13.3MB/s)(12.9MiB/1011msec) 00:11:12.618 slat (usec): min=2, max=20432, avg=149.35, stdev=1133.21 00:11:12.618 clat (usec): min=2730, max=63216, avg=19107.70, stdev=6952.59 00:11:12.618 lat (usec): min=2736, max=63228, avg=19257.05, stdev=7045.64 00:11:12.618 clat percentiles (usec): 00:11:12.618 | 1.00th=[ 7898], 5.00th=[10683], 10.00th=[11994], 20.00th=[14484], 00:11:12.618 | 30.00th=[14877], 40.00th=[17957], 50.00th=[19268], 60.00th=[20055], 00:11:12.618 | 70.00th=[21103], 80.00th=[21627], 90.00th=[25035], 95.00th=[27395], 00:11:12.618 | 99.00th=[53740], 99.50th=[57410], 99.90th=[63177], 99.95th=[63177], 00:11:12.618 | 99.99th=[63177] 00:11:12.618 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:11:12.618 slat (usec): min=3, max=16262, avg=125.76, stdev=848.76 00:11:12.618 clat (usec): min=686, max=63234, avg=18213.92, stdev=9600.10 00:11:12.618 lat (usec): min=691, max=63264, avg=18339.68, stdev=9674.02 00:11:12.618 clat percentiles (usec): 00:11:12.618 | 1.00th=[ 1139], 5.00th=[ 5145], 10.00th=[ 8455], 20.00th=[12256], 00:11:12.618 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15926], 60.00th=[16909], 00:11:12.618 | 70.00th=[20055], 80.00th=[22414], 90.00th=[33817], 95.00th=[39584], 00:11:12.618 | 99.00th=[44827], 99.50th=[46924], 99.90th=[48497], 99.95th=[63177], 00:11:12.618 | 99.99th=[63177] 00:11:12.618 bw ( KiB/s): min=14184, max=14488, per=21.64%, avg=14336.00, stdev=214.96, samples=2 00:11:12.618 iops : min= 3546, max= 3622, avg=3584.00, stdev=53.74, samples=2 00:11:12.618 lat (usec) : 750=0.04%, 1000=0.06% 00:11:12.618 lat (msec) : 2=1.88%, 4=0.60%, 10=6.43%, 20=55.78%, 50=34.65% 00:11:12.618 lat (msec) : 100=0.57% 00:11:12.618 cpu : usr=4.65%, sys=6.53%, ctx=302, majf=0, minf=1 00:11:12.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 
00:11:12.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.618 issued rwts: total=3293,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.618 job3: (groupid=0, jobs=1): err= 0: pid=819390: Tue Oct 1 01:28:52 2024 00:11:12.618 read: IOPS=1703, BW=6813KiB/s (6977kB/s)(6888KiB/1011msec) 00:11:12.618 slat (usec): min=2, max=48792, avg=275.40, stdev=2158.93 00:11:12.618 clat (msec): min=3, max=122, avg=33.56, stdev=22.32 00:11:12.618 lat (msec): min=12, max=122, avg=33.84, stdev=22.48 00:11:12.618 clat percentiles (msec): 00:11:12.618 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 22], 00:11:12.618 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 25], 00:11:12.618 | 70.00th=[ 33], 80.00th=[ 45], 90.00th=[ 59], 95.00th=[ 86], 00:11:12.618 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 123], 99.95th=[ 123], 00:11:12.618 | 99.99th=[ 123] 00:11:12.618 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:11:12.618 slat (usec): min=3, max=32508, avg=253.37, stdev=1950.73 00:11:12.618 clat (usec): min=12551, max=94429, avg=34173.61, stdev=16728.80 00:11:12.618 lat (usec): min=12562, max=94445, avg=34426.98, stdev=16903.44 00:11:12.618 clat percentiles (usec): 00:11:12.618 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13960], 20.00th=[16909], 00:11:12.618 | 30.00th=[20579], 40.00th=[23987], 50.00th=[35914], 60.00th=[38011], 00:11:12.618 | 70.00th=[41157], 80.00th=[49021], 90.00th=[60556], 95.00th=[61080], 00:11:12.618 | 99.00th=[72877], 99.50th=[72877], 99.90th=[81265], 99.95th=[81265], 00:11:12.618 | 99.99th=[94897] 00:11:12.618 bw ( KiB/s): min= 8192, max= 8192, per=12.37%, avg=8192.00, stdev= 0.00, samples=2 00:11:12.618 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:12.618 lat (msec) : 4=0.03%, 20=23.26%, 50=59.79%, 100=14.67%, 250=2.25% 00:11:12.618 cpu : usr=1.68%, sys=2.48%, ctx=179, majf=0, minf=1 00:11:12.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:11:12.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.618 issued rwts: total=1722,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.618 00:11:12.618 Run status group 0 (all jobs): 00:11:12.618 READ: bw=59.1MiB/s (62.0MB/s), 6813KiB/s-20.0MiB/s (6977kB/s-21.0MB/s), io=59.8MiB (62.7MB), run=1008-1011msec 00:11:12.618 WRITE: bw=64.7MiB/s (67.8MB/s), 8103KiB/s-21.8MiB/s (8297kB/s-22.8MB/s), io=65.4MiB (68.6MB), run=1008-1011msec 00:11:12.618 00:11:12.618 Disk stats (read/write): 00:11:12.618 nvme0n1: ios=4389/4608, merge=0/0, ticks=51075/52705, in_queue=103780, util=97.80% 00:11:12.618 nvme0n2: ios=4466/4608, merge=0/0, ticks=53608/51657, in_queue=105265, util=96.14% 00:11:12.618 nvme0n3: ios=2560/3071, merge=0/0, ticks=46596/56737, in_queue=103333, util=88.94% 00:11:12.618 nvme0n4: ios=1561/1631, merge=0/0, ticks=18342/19580, in_queue=37922, util=91.05% 00:11:12.618 01:28:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:12.618 01:28:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=819527 00:11:12.618 01:28:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:12.618 01:28:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:12.618 [global] 00:11:12.618 thread=1 00:11:12.618 invalidate=1 00:11:12.618 rw=read 00:11:12.618 time_based=1 00:11:12.618 runtime=10 00:11:12.618 ioengine=libaio 00:11:12.618 direct=1 00:11:12.618 bs=4096 00:11:12.618 iodepth=1 00:11:12.618 norandommap=1 00:11:12.618 numjobs=1 00:11:12.618 00:11:12.618 [job0] 00:11:12.618 filename=/dev/nvme0n1 00:11:12.618 [job1] 00:11:12.618 filename=/dev/nvme0n2 00:11:12.618 [job2] 00:11:12.618 filename=/dev/nvme0n3 00:11:12.618 [job3] 00:11:12.618 filename=/dev/nvme0n4 00:11:12.618 Could not set queue depth (nvme0n1) 00:11:12.618 Could not set queue depth (nvme0n2) 00:11:12.618 Could not set queue depth (nvme0n3) 00:11:12.618 Could not set queue depth (nvme0n4) 00:11:12.618 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.618 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.618 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.618 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.618 fio-3.35 00:11:12.618 Starting 4 threads 00:11:15.896 01:28:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:15.896 01:28:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:15.896 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35799040, buflen=4096 00:11:15.896 fio: pid=819618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.896 01:28:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.896 01:28:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:15.896 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16773120, buflen=4096 00:11:15.896 fio: pid=819617, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.153 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.153 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:16.410 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16662528, buflen=4096 00:11:16.410 fio: pid=819615, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.668 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.668 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:16.668 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7565312, 
buflen=4096 00:11:16.668 fio: pid=819616, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.668 00:11:16.668 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=819615: Tue Oct 1 01:28:56 2024 00:11:16.668 read: IOPS=1176, BW=4704KiB/s (4817kB/s)(15.9MiB/3459msec) 00:11:16.668 slat (usec): min=4, max=15815, avg=24.15, stdev=387.68 00:11:16.668 clat (usec): min=222, max=42007, avg=816.61, stdev=4371.01 00:11:16.668 lat (usec): min=230, max=56957, avg=836.89, stdev=4417.14 00:11:16.668 clat percentiles (usec): 00:11:16.668 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 265], 00:11:16.668 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 371], 00:11:16.668 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 441], 00:11:16.668 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:16.668 | 99.99th=[42206] 00:11:16.668 bw ( KiB/s): min= 96, max=10448, per=21.57%, avg=4320.00, stdev=4599.21, samples=6 00:11:16.668 iops : min= 24, max= 2612, avg=1080.00, stdev=1149.80, samples=6 00:11:16.668 lat (usec) : 250=11.48%, 500=86.04%, 750=1.30% 00:11:16.668 lat (msec) : 50=1.16% 00:11:16.668 cpu : usr=1.13%, sys=2.26%, ctx=4072, majf=0, minf=1 00:11:16.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 issued rwts: total=4069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.668 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=819616: Tue Oct 1 01:28:56 2024 00:11:16.668 read: IOPS=493, BW=1973KiB/s (2021kB/s)(7388KiB/3744msec) 00:11:16.668 slat (usec): min=5, max=10937, avg=20.62, stdev=303.44 00:11:16.668 clat (usec): min=226, max=42160, avg=1990.79, stdev=8198.73 00:11:16.668 lat (usec): min=232, max=52019, avg=2011.41, stdev=8258.03 00:11:16.668 clat percentiles (usec): 00:11:16.668 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:11:16.668 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 285], 00:11:16.668 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 379], 00:11:16.668 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:16.668 | 99.99th=[42206] 00:11:16.668 bw ( KiB/s): min= 96, max= 8896, per=10.50%, avg=2104.43, stdev=3436.25, samples=7 00:11:16.668 iops : min= 24, max= 2224, avg=526.00, stdev=859.14, samples=7 00:11:16.668 lat (usec) : 250=12.01%, 500=83.50%, 750=0.11%, 1000=0.11% 00:11:16.668 lat (msec) : 2=0.05%, 50=4.17% 00:11:16.668 cpu : usr=0.35%, sys=0.75%, ctx=1851, majf=0, minf=1 00:11:16.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.668 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=819617: Tue Oct 1 01:28:56 2024 00:11:16.668 read: IOPS=1288, BW=5151KiB/s (5275kB/s)(16.0MiB/3180msec) 00:11:16.668 slat (usec): min=5, max=10717, avg=15.63, stdev=206.20 
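The "Operation not supported" io_u errors in this last pass are the point of the hotplug phase: while the 10-second read jobs are still running, the script deletes the RAID volumes and the malloc bdevs underneath them and then tears the target down, so fio failing here is the expected outcome (the script itself prints "nvmf hotplug test: fio failed as expected" below). A sketch of that teardown order, using only the RPC and nvme-cli calls visible in this trace; the loop over malloc names is a condensed assumption, the script iterates the same list one variable at a time.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# remove the RAID namespaces first, then every malloc bdev backing the subsystem
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete $m
done
# once fio has reported its errors, detach the initiator and drop the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1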
00:11:16.668 clat (usec): min=220, max=42986, avg=752.18, stdev=4248.13 00:11:16.668 lat (usec): min=232, max=43028, avg=767.82, stdev=4254.18 00:11:16.668 clat percentiles (usec): 00:11:16.668 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:11:16.668 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 289], 00:11:16.668 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 420], 00:11:16.668 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:16.668 | 99.99th=[42730] 00:11:16.668 bw ( KiB/s): min= 96, max=14608, per=23.85%, avg=4777.33, stdev=6254.57, samples=6 00:11:16.668 iops : min= 24, max= 3652, avg=1194.33, stdev=1563.64, samples=6 00:11:16.668 lat (usec) : 250=14.97%, 500=83.01%, 750=0.88% 00:11:16.668 lat (msec) : 2=0.02%, 4=0.02%, 50=1.07% 00:11:16.668 cpu : usr=1.04%, sys=2.14%, ctx=4098, majf=0, minf=2 00:11:16.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 issued rwts: total=4096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.668 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=819618: Tue Oct 1 01:28:56 2024 00:11:16.668 read: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(34.1MiB/2896msec) 00:11:16.668 slat (nsec): min=5681, max=65855, avg=12852.64, stdev=5824.06 00:11:16.668 clat (usec): min=228, max=41960, avg=311.71, stdev=449.93 00:11:16.668 lat (usec): min=236, max=41970, avg=324.56, stdev=450.19 00:11:16.668 clat percentiles (usec): 00:11:16.668 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:11:16.668 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 306], 00:11:16.668 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 392], 00:11:16.668 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 685], 00:11:16.668 | 99.99th=[42206] 00:11:16.668 bw ( KiB/s): min= 9624, max=13696, per=59.18%, avg=11854.40, stdev=1605.86, samples=5 00:11:16.668 iops : min= 2406, max= 3424, avg=2963.60, stdev=401.47, samples=5 00:11:16.668 lat (usec) : 250=3.39%, 500=95.17%, 750=1.40%, 1000=0.01% 00:11:16.668 lat (msec) : 4=0.01%, 50=0.01% 00:11:16.668 cpu : usr=2.21%, sys=6.46%, ctx=8741, majf=0, minf=2 00:11:16.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.668 issued rwts: total=8741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.668 00:11:16.668 Run status group 0 (all jobs): 00:11:16.668 READ: bw=19.6MiB/s (20.5MB/s), 1973KiB/s-11.8MiB/s (2021kB/s-12.4MB/s), io=73.2MiB (76.8MB), run=2896-3744msec 00:11:16.668 00:11:16.668 Disk stats (read/write): 00:11:16.668 nvme0n1: ios=3833/0, merge=0/0, ticks=3195/0, in_queue=3195, util=95.39% 00:11:16.668 nvme0n2: ios=1844/0, merge=0/0, ticks=3537/0, in_queue=3537, util=96.03% 00:11:16.668 nvme0n3: ios=3912/0, merge=0/0, ticks=2992/0, in_queue=2992, util=96.50% 00:11:16.668 nvme0n4: ios=8673/0, merge=0/0, ticks=2608/0, in_queue=2608, util=96.74% 00:11:16.926 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.926 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:17.184 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.184 01:28:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:17.441 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.441 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:17.699 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.699 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 819527 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.956 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:18.213 nvmf hotplug test: fio failed as expected 00:11:18.213 01:28:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:18.471 01:28:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.471 rmmod nvme_tcp 00:11:18.471 rmmod nvme_fabrics 00:11:18.471 rmmod nvme_keyring 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 817489 ']' 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 817489 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 817489 ']' 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 817489 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 817489 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 817489' 00:11:18.471 killing process with pid 817489 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 817489 00:11:18.471 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 817489 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.730 01:28:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.265 00:11:21.265 real 0m24.161s 00:11:21.265 user 1m23.401s 00:11:21.265 sys 0m8.092s 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.265 ************************************ 00:11:21.265 END TEST nvmf_fio_target 00:11:21.265 ************************************ 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.265 ************************************ 00:11:21.265 START TEST nvmf_bdevio 00:11:21.265 ************************************ 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.265 * Looking for test storage... 
00:11:21.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.265 --rc genhtml_branch_coverage=1 00:11:21.265 --rc genhtml_function_coverage=1 00:11:21.265 --rc genhtml_legend=1 00:11:21.265 --rc geninfo_all_blocks=1 00:11:21.265 --rc geninfo_unexecuted_blocks=1 00:11:21.265 00:11:21.265 ' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.265 --rc genhtml_branch_coverage=1 00:11:21.265 --rc genhtml_function_coverage=1 00:11:21.265 --rc genhtml_legend=1 00:11:21.265 --rc geninfo_all_blocks=1 00:11:21.265 --rc geninfo_unexecuted_blocks=1 00:11:21.265 00:11:21.265 ' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.265 --rc genhtml_branch_coverage=1 00:11:21.265 --rc genhtml_function_coverage=1 00:11:21.265 --rc genhtml_legend=1 00:11:21.265 --rc geninfo_all_blocks=1 00:11:21.265 --rc geninfo_unexecuted_blocks=1 00:11:21.265 00:11:21.265 ' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.265 --rc genhtml_branch_coverage=1 00:11:21.265 --rc genhtml_function_coverage=1 00:11:21.265 --rc genhtml_legend=1 00:11:21.265 --rc geninfo_all_blocks=1 00:11:21.265 --rc geninfo_unexecuted_blocks=1 00:11:21.265 00:11:21.265 ' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.265 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.266 01:29:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:23.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:23.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:23.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.167 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:23.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.168 01:29:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:23.168 00:11:23.168 --- 10.0.0.2 ping statistics --- 00:11:23.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.168 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:23.168 00:11:23.168 --- 10.0.0.1 ping statistics --- 00:11:23.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.168 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=822268 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 822268 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 822268 ']' 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.168 01:29:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.168 [2024-10-01 01:29:02.938536] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:11:23.168 [2024-10-01 01:29:02.938623] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.168 [2024-10-01 01:29:03.013094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.426 [2024-10-01 01:29:03.106383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.426 [2024-10-01 01:29:03.106444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.426 [2024-10-01 01:29:03.106461] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.426 [2024-10-01 01:29:03.106474] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.426 [2024-10-01 01:29:03.106485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.426 [2024-10-01 01:29:03.106569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.426 [2024-10-01 01:29:03.106623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:23.426 [2024-10-01 01:29:03.106687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:23.426 [2024-10-01 01:29:03.106691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.426 [2024-10-01 01:29:03.263914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.426 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 Malloc0 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:29:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 [2024-10-01 01:29:03.317495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:23.684 { 00:11:23.684 "params": { 00:11:23.684 "name": "Nvme$subsystem", 00:11:23.684 "trtype": "$TEST_TRANSPORT", 00:11:23.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:23.684 "adrfam": "ipv4", 00:11:23.684 "trsvcid": "$NVMF_PORT", 00:11:23.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:23.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:23.684 "hdgst": ${hdgst:-false}, 00:11:23.684 "ddgst": ${ddgst:-false} 00:11:23.684 }, 00:11:23.684 "method": "bdev_nvme_attach_controller" 00:11:23.684 } 00:11:23.684 EOF 00:11:23.684 )") 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:23.684 01:29:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:23.684 "params": { 00:11:23.684 "name": "Nvme1", 00:11:23.684 "trtype": "tcp", 00:11:23.684 "traddr": "10.0.0.2", 00:11:23.684 "adrfam": "ipv4", 00:11:23.684 "trsvcid": "4420", 00:11:23.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.684 "hdgst": false, 00:11:23.684 "ddgst": false 00:11:23.684 }, 00:11:23.684 "method": "bdev_nvme_attach_controller" 00:11:23.684 }' 00:11:23.684 [2024-10-01 01:29:03.368250] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:11:23.684 [2024-10-01 01:29:03.368344] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822298 ] 00:11:23.684 [2024-10-01 01:29:03.429244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.684 [2024-10-01 01:29:03.521051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.684 [2024-10-01 01:29:03.521104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.684 [2024-10-01 01:29:03.521108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.249 I/O targets: 00:11:24.249 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:24.249 00:11:24.249 00:11:24.249 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.249 http://cunit.sourceforge.net/ 00:11:24.249 00:11:24.249 00:11:24.249 Suite: bdevio tests on: Nvme1n1 00:11:24.249 Test: blockdev write read block ...passed 00:11:24.249 Test: blockdev write zeroes read block ...passed 00:11:24.249 Test: blockdev write zeroes read no split ...passed 00:11:24.249 Test: blockdev write zeroes read split ...passed 00:11:24.249 Test: blockdev write zeroes read split partial ...passed 00:11:24.249 Test: blockdev reset ...[2024-10-01 01:29:04.018901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:24.249 [2024-10-01 01:29:04.019020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4fe90 (9): Bad file descriptor 00:11:24.249 [2024-10-01 01:29:04.089842] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:24.249 passed 00:11:24.505 Test: blockdev write read 8 blocks ...passed 00:11:24.505 Test: blockdev write read size > 128k ...passed 00:11:24.505 Test: blockdev write read invalid size ...passed 00:11:24.505 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.505 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.505 Test: blockdev write read max offset ...passed 00:11:24.505 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.505 Test: blockdev writev readv 8 blocks ...passed 00:11:24.505 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.505 Test: blockdev writev readv block ...passed 00:11:24.505 Test: blockdev writev readv size > 128k ...passed 00:11:24.505 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.505 Test: blockdev comparev and writev ...[2024-10-01 01:29:04.304730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.304765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.304790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.305192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.305217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.305238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.305255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.305614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.305638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.305659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.305676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.306049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.306074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:24.505 [2024-10-01 01:29:04.306095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:24.505 [2024-10-01 01:29:04.306112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:24.505 passed 00:11:24.761 Test: blockdev nvme passthru rw ...passed 00:11:24.761 Test: blockdev nvme passthru vendor specific ...[2024-10-01 01:29:04.389337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.761 [2024-10-01 01:29:04.389365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:24.761 [2024-10-01 01:29:04.389543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.761 [2024-10-01 01:29:04.389565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:24.761 [2024-10-01 01:29:04.389739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.761 [2024-10-01 01:29:04.389761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:24.761 [2024-10-01 01:29:04.389940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:24.761 [2024-10-01 01:29:04.389970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:24.761 passed 00:11:24.761 Test: blockdev nvme admin passthru ...passed 00:11:24.761 Test: blockdev copy ...passed 00:11:24.762 00:11:24.762 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.762 suites 1 1 n/a 0 0 00:11:24.762 tests 23 23 23 0 0 00:11:24.762 asserts 152 152 152 0 n/a 00:11:24.762 00:11:24.762 Elapsed time = 1.184 seconds 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.028 rmmod nvme_tcp 00:11:25.028 rmmod nvme_fabrics 00:11:25.028 rmmod nvme_keyring 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 822268 ']' 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 822268 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 822268 ']' 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 822268 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 822268 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 822268' 00:11:25.028 killing process with pid 822268 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 822268 00:11:25.028 01:29:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 822268 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.287 01:29:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.817 01:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.817 00:11:27.817 real 0m6.535s 00:11:27.817 user 0m10.995s 00:11:27.817 sys 0m2.140s 00:11:27.817 01:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.817 01:29:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.817 ************************************ 00:11:27.817 END TEST nvmf_bdevio 00:11:27.817 ************************************ 00:11:27.817 01:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.817 00:11:27.817 real 3m55.980s 00:11:27.817 user 10m9.853s 00:11:27.817 sys 1m11.066s 00:11:27.817 
01:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.817 01:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.817 ************************************ 00:11:27.817 END TEST nvmf_target_core 00:11:27.817 ************************************ 00:11:27.817 01:29:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.817 01:29:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.817 01:29:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.817 01:29:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.818 ************************************ 00:11:27.818 START TEST nvmf_target_extra 00:11:27.818 ************************************ 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.818 * Looking for test storage... 00:11:27.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.818 --rc genhtml_branch_coverage=1 00:11:27.818 --rc genhtml_function_coverage=1 00:11:27.818 --rc genhtml_legend=1 00:11:27.818 --rc geninfo_all_blocks=1 00:11:27.818 --rc geninfo_unexecuted_blocks=1 00:11:27.818 00:11:27.818 ' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.818 --rc genhtml_branch_coverage=1 00:11:27.818 --rc genhtml_function_coverage=1 00:11:27.818 --rc genhtml_legend=1 00:11:27.818 --rc geninfo_all_blocks=1 00:11:27.818 --rc geninfo_unexecuted_blocks=1 00:11:27.818 00:11:27.818 ' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.818 --rc genhtml_branch_coverage=1 00:11:27.818 --rc genhtml_function_coverage=1 00:11:27.818 --rc genhtml_legend=1 00:11:27.818 --rc geninfo_all_blocks=1 00:11:27.818 --rc geninfo_unexecuted_blocks=1 00:11:27.818 00:11:27.818 ' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.818 --rc genhtml_branch_coverage=1 00:11:27.818 --rc genhtml_function_coverage=1 00:11:27.818 --rc genhtml_legend=1 00:11:27.818 --rc geninfo_all_blocks=1 00:11:27.818 --rc geninfo_unexecuted_blocks=1 00:11:27.818 00:11:27.818 ' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
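The scripts/common.sh block traced a little above (lcov --version piped through awk, then cmp_versions 1.15 '<' 2) is the coverage-tooling probe run_test performs before the test body starts; condensed into a standalone helper it looks roughly like the sketch below (a simplification written only to show the component-wise compare the trace walks through, not a copy of scripts/common.sh):
# simplified stand-in for the lt/cmp_versions helper traced above (assumption: same split-on-".-" and per-component compare)
lt() {                                    # succeed when version $1 sorts before version $2
  local -a a b
  IFS=.- read -ra a <<< "$1"
  IFS=.- read -ra b <<< "$2"
  local v
  for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
    (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
  done
  return 1                                # equal versions are not "less than"
}
lt 1.15 2 && echo 'pre-2.0 lcov, keep the --rc lcov_* option spelling'   # the branch the trace takes before setting lcov_rc_opt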
00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.818 ************************************ 00:11:27.818 START TEST nvmf_example 00:11:27.818 ************************************ 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.818 * Looking for test storage... 
00:11:27.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.818 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.819 --rc genhtml_branch_coverage=1 00:11:27.819 --rc genhtml_function_coverage=1 00:11:27.819 --rc genhtml_legend=1 00:11:27.819 --rc geninfo_all_blocks=1 00:11:27.819 --rc geninfo_unexecuted_blocks=1 00:11:27.819 00:11:27.819 ' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.819 --rc genhtml_branch_coverage=1 00:11:27.819 --rc genhtml_function_coverage=1 00:11:27.819 --rc genhtml_legend=1 00:11:27.819 --rc geninfo_all_blocks=1 00:11:27.819 --rc geninfo_unexecuted_blocks=1 00:11:27.819 00:11:27.819 ' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.819 --rc genhtml_branch_coverage=1 00:11:27.819 --rc genhtml_function_coverage=1 00:11:27.819 --rc genhtml_legend=1 00:11:27.819 --rc geninfo_all_blocks=1 00:11:27.819 --rc geninfo_unexecuted_blocks=1 00:11:27.819 00:11:27.819 ' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.819 --rc genhtml_branch_coverage=1 00:11:27.819 --rc genhtml_function_coverage=1 00:11:27.819 --rc genhtml_legend=1 00:11:27.819 --rc geninfo_all_blocks=1 00:11:27.819 --rc geninfo_unexecuted_blocks=1 00:11:27.819 00:11:27.819 ' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:27.819 01:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:27.819 01:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.819 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.820 01:29:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:29.718 01:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:29.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:29.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:29.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.718 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:29.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.719 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.976 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.976 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.976 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.976 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.976 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:29.977 00:11:29.977 --- 10.0.0.2 ping statistics --- 00:11:29.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.977 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:29.977 00:11:29.977 --- 10.0.0.1 ping statistics --- 00:11:29.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.977 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=824552 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 824552 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 824552 ']' 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.977 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.234 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:30.234 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:30.234 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.234 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.234 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.234 01:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.491 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.491 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:30.491 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.483 Initializing NVMe Controllers 00:11:40.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.483 Initialization complete. Launching workers. 00:11:40.483 ======================================================== 00:11:40.483 Latency(us) 00:11:40.483 Device Information : IOPS MiB/s Average min max 00:11:40.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13721.37 53.60 4664.95 914.77 15238.32 00:11:40.483 ======================================================== 00:11:40.483 Total : 13721.37 53.60 4664.95 914.77 15238.32 00:11:40.483 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.483 rmmod nvme_tcp 00:11:40.483 rmmod nvme_fabrics 00:11:40.483 rmmod nvme_keyring 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 824552 ']' 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 824552 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 824552 ']' 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 824552 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.483 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 824552 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 824552' 00:11:40.742 killing process with pid 824552 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 824552 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 824552 00:11:40.742 nvmf threads initialize successfully 00:11:40.742 bdev subsystem init successfully 00:11:40.742 created a nvmf target service 00:11:40.742 create targets's poll groups done 00:11:40.742 all subsystems of target started 00:11:40.742 nvmf target is running 00:11:40.742 all subsystems of target stopped 00:11:40.742 destroy targets's poll groups done 00:11:40.742 destroyed the nvmf target service 00:11:40.742 bdev subsystem finish successfully 00:11:40.742 nvmf threads destroy successfully 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:40.742 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.001 01:29:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.902 00:11:42.902 real 0m15.302s 00:11:42.902 user 0m39.270s 00:11:42.902 sys 0m4.486s 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.902 ************************************ 00:11:42.902 END TEST nvmf_example 00:11:42.902 ************************************ 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:42.902 01:29:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.902 ************************************ 00:11:42.902 START TEST nvmf_filesystem 00:11:42.902 ************************************ 00:11:42.902 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:43.162 * Looking for test storage... 00:11:43.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.162 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:43.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.163 --rc genhtml_branch_coverage=1 00:11:43.163 --rc genhtml_function_coverage=1 00:11:43.163 --rc genhtml_legend=1 00:11:43.163 --rc geninfo_all_blocks=1 00:11:43.163 --rc geninfo_unexecuted_blocks=1 00:11:43.163 00:11:43.163 ' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:43.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.163 --rc genhtml_branch_coverage=1 00:11:43.163 --rc genhtml_function_coverage=1 00:11:43.163 --rc genhtml_legend=1 00:11:43.163 --rc geninfo_all_blocks=1 00:11:43.163 --rc geninfo_unexecuted_blocks=1 00:11:43.163 00:11:43.163 ' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:43.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.163 --rc genhtml_branch_coverage=1 00:11:43.163 --rc genhtml_function_coverage=1 00:11:43.163 --rc genhtml_legend=1 00:11:43.163 --rc geninfo_all_blocks=1 00:11:43.163 --rc geninfo_unexecuted_blocks=1 00:11:43.163 00:11:43.163 ' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:43.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.163 --rc genhtml_branch_coverage=1 00:11:43.163 --rc genhtml_function_coverage=1 00:11:43.163 --rc genhtml_legend=1 00:11:43.163 --rc geninfo_all_blocks=1 00:11:43.163 --rc geninfo_unexecuted_blocks=1 00:11:43.163 00:11:43.163 ' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:43.163 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:43.163 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:43.163 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.163 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:43.164 #define SPDK_CONFIG_H 00:11:43.164 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:43.164 #define SPDK_CONFIG_APPS 1 00:11:43.164 #define SPDK_CONFIG_ARCH native 00:11:43.164 #undef SPDK_CONFIG_ASAN 00:11:43.164 #undef SPDK_CONFIG_AVAHI 00:11:43.164 #undef SPDK_CONFIG_CET 00:11:43.164 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:43.164 #define SPDK_CONFIG_COVERAGE 1 00:11:43.164 #define SPDK_CONFIG_CROSS_PREFIX 00:11:43.164 #undef SPDK_CONFIG_CRYPTO 00:11:43.164 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:43.164 #undef SPDK_CONFIG_CUSTOMOCF 00:11:43.164 #undef SPDK_CONFIG_DAOS 00:11:43.164 #define SPDK_CONFIG_DAOS_DIR 00:11:43.164 #define SPDK_CONFIG_DEBUG 1 00:11:43.164 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:43.164 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.164 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:43.164 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.164 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:43.164 #undef SPDK_CONFIG_DPDK_UADK 00:11:43.164 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.164 #define SPDK_CONFIG_EXAMPLES 1 00:11:43.164 #undef SPDK_CONFIG_FC 00:11:43.164 #define SPDK_CONFIG_FC_PATH 00:11:43.164 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:43.164 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:43.164 #define SPDK_CONFIG_FSDEV 1 00:11:43.164 #undef SPDK_CONFIG_FUSE 00:11:43.164 #undef SPDK_CONFIG_FUZZER 00:11:43.164 #define SPDK_CONFIG_FUZZER_LIB 00:11:43.164 #undef SPDK_CONFIG_GOLANG 00:11:43.164 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:43.164 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:43.164 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:43.164 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:43.164 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:43.164 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:43.164 #undef SPDK_CONFIG_HAVE_LZ4 00:11:43.164 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:43.164 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:43.164 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:43.164 #define SPDK_CONFIG_IDXD 1 00:11:43.164 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:43.164 #undef SPDK_CONFIG_IPSEC_MB 00:11:43.164 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:43.164 #define SPDK_CONFIG_ISAL 1 00:11:43.164 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:43.164 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:43.164 #define SPDK_CONFIG_LIBDIR 00:11:43.164 #undef SPDK_CONFIG_LTO 00:11:43.164 #define SPDK_CONFIG_MAX_LCORES 128 00:11:43.164 #define SPDK_CONFIG_NVME_CUSE 1 00:11:43.164 #undef SPDK_CONFIG_OCF 00:11:43.164 #define SPDK_CONFIG_OCF_PATH 00:11:43.164 #define SPDK_CONFIG_OPENSSL_PATH 00:11:43.164 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:43.164 #define SPDK_CONFIG_PGO_DIR 00:11:43.164 #undef SPDK_CONFIG_PGO_USE 00:11:43.164 #define SPDK_CONFIG_PREFIX /usr/local 00:11:43.164 #undef SPDK_CONFIG_RAID5F 00:11:43.164 #undef SPDK_CONFIG_RBD 00:11:43.164 #define SPDK_CONFIG_RDMA 1 00:11:43.164 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:43.164 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:43.164 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:43.164 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:43.164 #define SPDK_CONFIG_SHARED 1 00:11:43.164 #undef SPDK_CONFIG_SMA 00:11:43.164 
#define SPDK_CONFIG_TESTS 1 00:11:43.164 #undef SPDK_CONFIG_TSAN 00:11:43.164 #define SPDK_CONFIG_UBLK 1 00:11:43.164 #define SPDK_CONFIG_UBSAN 1 00:11:43.164 #undef SPDK_CONFIG_UNIT_TESTS 00:11:43.164 #undef SPDK_CONFIG_URING 00:11:43.164 #define SPDK_CONFIG_URING_PATH 00:11:43.164 #undef SPDK_CONFIG_URING_ZNS 00:11:43.164 #undef SPDK_CONFIG_USDT 00:11:43.164 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:43.164 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:43.164 #define SPDK_CONFIG_VFIO_USER 1 00:11:43.164 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:43.164 #define SPDK_CONFIG_VHOST 1 00:11:43.164 #define SPDK_CONFIG_VIRTIO 1 00:11:43.164 #undef SPDK_CONFIG_VTUNE 00:11:43.164 #define SPDK_CONFIG_VTUNE_DIR 00:11:43.164 #define SPDK_CONFIG_WERROR 1 00:11:43.164 #define SPDK_CONFIG_WPDK_DIR 00:11:43.164 #undef SPDK_CONFIG_XNVME 00:11:43.164 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.164 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:43.165 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:43.165 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:43.165 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:43.166 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.166 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:43.166 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 826131 ]] 00:11:43.167 01:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 826131 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.4nMsUj 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4nMsUj/tests/target /tmp/spdk.4nMsUj 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:43.167 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=52603805696 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988532224 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=9384726528 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982897664 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993371136 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=897024 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:43.168 * Looking for test storage... 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=52603805696 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=11599319040 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
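The trace above is the test harness reading df output into its mounts/fss/avails/sizes/uses arrays and exporting the first candidate directory with enough free space as SPDK_TEST_STORAGE. A simplified standalone sketch of that free-space gate, assuming GNU df; the 10 GiB threshold and the single candidate path are illustrative stand-ins for values the script computes itself:

    #!/usr/bin/env bash
    # Accept a test-storage directory only if its backing filesystem has enough room.
    requested_size=$((10 * 1024 * 1024 * 1024))   # illustrative 10 GiB threshold
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

    # df -B1 reports bytes; the second output line of --output=avail is the data row.
    target_space=$(df -B1 --output=avail "$target_dir" | awk 'NR==2 {print $1}')

    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
    else
        printf 'Not enough space under %s (%s bytes free)\n' "$target_dir" "$target_space" >&2
    fi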
common/autotest_common.sh@400 -- # return 0 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:43.168 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # 
case "$op" in 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.426 --rc genhtml_branch_coverage=1 00:11:43.426 --rc genhtml_function_coverage=1 00:11:43.426 --rc genhtml_legend=1 00:11:43.426 --rc geninfo_all_blocks=1 00:11:43.426 --rc geninfo_unexecuted_blocks=1 00:11:43.426 00:11:43.426 ' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.426 --rc genhtml_branch_coverage=1 00:11:43.426 --rc genhtml_function_coverage=1 00:11:43.426 --rc genhtml_legend=1 00:11:43.426 --rc geninfo_all_blocks=1 00:11:43.426 --rc geninfo_unexecuted_blocks=1 00:11:43.426 00:11:43.426 ' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.426 --rc genhtml_branch_coverage=1 00:11:43.426 --rc genhtml_function_coverage=1 00:11:43.426 --rc genhtml_legend=1 00:11:43.426 --rc geninfo_all_blocks=1 00:11:43.426 --rc geninfo_unexecuted_blocks=1 00:11:43.426 00:11:43.426 ' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.426 --rc genhtml_branch_coverage=1 00:11:43.426 --rc genhtml_function_coverage=1 00:11:43.426 --rc genhtml_legend=1 00:11:43.426 
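The lt 1.15 2 call traced here is scripts/common.sh deciding whether the installed lcov predates 2.x, so the older --rc coverage options can be passed; cmp_versions splits both strings on dots and compares them field by field. A rough standalone sketch of that comparison (the helper name version_lt is mine, not the script's, and it assumes purely numeric components):

    # Return 0 (true) if dotted version $1 is strictly older than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # e.g. gate the legacy lcov options the same way the trace does:
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi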
--rc geninfo_all_blocks=1 00:11:43.426 --rc geninfo_unexecuted_blocks=1 00:11:43.426 00:11:43.426 ' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
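nvmf/common.sh, sourced above, fixes the listener ports at 4420-4422 and builds the host identity that later nvme connect calls will use: nvme gen-hostnqn produces an NQN whose embedded UUID doubles as the host ID. A small sketch of that setup, assuming nvme-cli is installed; the parameter expansion used to peel off the UUID is my own shorthand for what the script does:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422

    # gen-hostnqn prints e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>;
    # the trailing UUID is reused as --hostid when connecting.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")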
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.426 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.427 01:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:45.327 
01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.327 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:45.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:45.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:45.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:45.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
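The device scan above filters the host's PCI NICs against known Intel E810/X722 and Mellanox device IDs, then resolves each surviving PCI address to its kernel interface through sysfs, which is how cvl_0_0 and cvl_0_1 turn up under 0000:0a:00.0 and 0000:0a:00.1. A condensed sketch of that sysfs lookup; the two PCI addresses are the ones reported in the trace and would need replacing on other hosts:

    # Map an NVMe/TCP-capable NIC's PCI address to its net device name(s).
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue              # no network function bound here
            dev=${netdir##*/}
            state=$(cat "$netdir/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $dev ($state)"
        done
    done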
00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.328 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.586 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:11:45.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:11:45.587 00:11:45.587 --- 10.0.0.2 ping statistics --- 00:11:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.587 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:11:45.587 00:11:45.587 --- 10.0.0.1 ping statistics --- 00:11:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.587 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.587 ************************************ 00:11:45.587 START TEST nvmf_filesystem_no_in_capsule 00:11:45.587 ************************************ 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=827892 00:11:45.587 01:29:25 
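nvmf_tcp_init, traced above, moves the target-side NIC into its own network namespace so that the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) exchange real TCP traffic on one machine, opens port 4420 in the firewall, and ping-checks both directions before any NVMe traffic flows. Reduced to the commands visible in the trace, the setup is roughly:

    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    NETNS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Target interface lives in its own namespace; the initiator stays in the default one.
    ip netns add "$NETNS"
    ip link set "$TARGET_IF" netns "$NETNS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NETNS" ip link set "$TARGET_IF" up
    ip netns exec "$NETNS" ip link set lo up

    # Let NVMe/TCP traffic in, then verify reachability in both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NETNS" ping -c 1 10.0.0.1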
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 827892 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 827892 ']' 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.587 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.587 [2024-10-01 01:29:25.435132] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:45.587 [2024-10-01 01:29:25.435223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.845 [2024-10-01 01:29:25.503628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.845 [2024-10-01 01:29:25.594569] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.845 [2024-10-01 01:29:25.594635] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.845 [2024-10-01 01:29:25.594648] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.845 [2024-10-01 01:29:25.594659] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.845 [2024-10-01 01:29:25.594669] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
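nvmfappstart, traced above, launches the SPDK target inside that namespace with the shared-memory id, tracepoint mask and core mask shown on its command line, then waitforlisten blocks until the application's RPC socket accepts commands. A stripped-down sketch of the launch-and-wait step; the /var/tmp/spdk.sock path comes from the trace, while the 30-second polling loop is my own simplification of waitforlisten:

    NETNS=cvl_0_0_ns_spdk
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    # -i: shm id, -e: tracepoint group mask, -m: core mask (cores 0-3 here).
    ip netns exec "$NETNS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Do not issue RPCs until the UNIX-domain RPC socket exists.
    for _ in $(seq 1 30); do
        [[ -S /var/tmp/spdk.sock ]] && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 1
    done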
00:11:45.845 [2024-10-01 01:29:25.594762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.845 [2024-10-01 01:29:25.594823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.845 [2024-10-01 01:29:25.594889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.845 [2024-10-01 01:29:25.594891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 [2024-10-01 01:29:25.757237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 01:29:25 
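With the target running, filesystem.sh provisions it entirely over RPC: a TCP transport, a 512 MiB malloc bdev, a subsystem, its namespace and a TCP listener. The rpc_cmd calls traced here and just below correspond to scripts/rpc.py invocations; a consolidated sketch of the sequence, with every flag copied from the trace rather than chosen by me:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # -c 0 keeps in-capsule data at zero, matching the "no_in_capsule" test variant.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0

    # 512 MiB backing device: MALLOC_BDEV_SIZE=512, MALLOC_BLOCK_SIZE=512.
    "$RPC" bdev_malloc_create 512 512 -b Malloc1

    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420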
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.103 [2024-10-01 01:29:25.939120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.103 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.361 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.361 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:46.361 { 00:11:46.361 "name": "Malloc1", 00:11:46.361 "aliases": [ 00:11:46.361 "efedac64-7306-46b2-a29d-69fb63bdd679" 00:11:46.361 ], 00:11:46.361 "product_name": "Malloc disk", 00:11:46.361 "block_size": 512, 00:11:46.361 "num_blocks": 1048576, 00:11:46.361 "uuid": "efedac64-7306-46b2-a29d-69fb63bdd679", 00:11:46.361 "assigned_rate_limits": { 00:11:46.361 "rw_ios_per_sec": 0, 00:11:46.361 "rw_mbytes_per_sec": 0, 00:11:46.361 "r_mbytes_per_sec": 0, 00:11:46.361 "w_mbytes_per_sec": 0 00:11:46.361 }, 00:11:46.361 "claimed": true, 00:11:46.361 "claim_type": "exclusive_write", 00:11:46.361 "zoned": false, 00:11:46.361 "supported_io_types": { 00:11:46.361 "read": 
true, 00:11:46.361 "write": true, 00:11:46.361 "unmap": true, 00:11:46.361 "flush": true, 00:11:46.361 "reset": true, 00:11:46.361 "nvme_admin": false, 00:11:46.361 "nvme_io": false, 00:11:46.361 "nvme_io_md": false, 00:11:46.361 "write_zeroes": true, 00:11:46.361 "zcopy": true, 00:11:46.361 "get_zone_info": false, 00:11:46.361 "zone_management": false, 00:11:46.361 "zone_append": false, 00:11:46.361 "compare": false, 00:11:46.361 "compare_and_write": false, 00:11:46.361 "abort": true, 00:11:46.361 "seek_hole": false, 00:11:46.361 "seek_data": false, 00:11:46.361 "copy": true, 00:11:46.361 "nvme_iov_md": false 00:11:46.361 }, 00:11:46.361 "memory_domains": [ 00:11:46.361 { 00:11:46.361 "dma_device_id": "system", 00:11:46.361 "dma_device_type": 1 00:11:46.361 }, 00:11:46.361 { 00:11:46.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.361 "dma_device_type": 2 00:11:46.361 } 00:11:46.361 ], 00:11:46.361 "driver_specific": {} 00:11:46.361 } 00:11:46.361 ]' 00:11:46.361 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:46.361 01:29:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.361 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.926 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.926 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.926 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.926 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:46.926 01:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
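get_bdev_size, traced above, derives the usable capacity of Malloc1 from the bdev_get_bdevs JSON: 512-byte blocks times 1048576 blocks gives the 536870912 bytes that is later compared against the size of the connected NVMe device. Roughly, with jq (the MiB conversion mirrors the 512/536870912 values in the trace; the helper's exact arithmetic may differ):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    bdev_info=$("$RPC" bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")        # 512-byte blocks
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")        # 1048576 blocks
    bdev_size_mb=$(( bs * nb / 1024 / 1024 ))          # 512
    malloc_size=$(( bdev_size_mb * 1024 * 1024 ))      # 536870912 bytes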
SPDKISFASTANDAWESOME 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:49.450 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.450 01:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.381 ************************************ 00:11:50.381 START TEST filesystem_ext4 00:11:50.381 ************************************ 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
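The initiator side then connects to the exported subsystem with the host identity generated earlier, waits for a block device whose serial matches SPDKISFASTANDAWESOME, and carves a single GPT partition out of it for the filesystem tests. In shell, using the commands visible in the trace and condensing the wait into a simple polling loop:

    modprobe nvme-tcp

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # Wait until lsblk lists a device carrying the subsystem's serial number.
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

    # One GPT partition spanning the namespace, then re-read the partition table.
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    sleep 1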
00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:50.381 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.381 mke2fs 1.47.0 (5-Feb-2023) 00:11:50.639 Discarding device blocks: 0/522240 done 00:11:50.639 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.639 Filesystem UUID: edf7053e-ac30-4557-a582-9bace6edd48b 00:11:50.639 Superblock backups stored on blocks: 00:11:50.639 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.639 00:11:50.639 Allocating group tables: 0/64 done 00:11:50.639 Writing inode tables: 0/64 done 00:11:51.203 Creating journal (8192 blocks): done 00:11:51.203 Writing superblocks and filesystem accounting information: 0/64 done 00:11:51.203 00:11:51.203 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:51.203 01:29:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.454 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.714 
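Each filesystem variant repeats the cycle the ext4 trace above just completed: build the filesystem on the partition, mount it, prove a small write round-trip (touch, sync, rm, sync), and unmount while the target process is still alive; the btrfs and xfs runs below differ only in their mkfs command and force flag. One iteration, sketched with ext4 as in the trace:

    dev=/dev/nvme0n1p1
    mnt=/mnt/device

    mkfs.ext4 -F "$dev"        # the later runs use "mkfs.btrfs -f" and "mkfs.xfs -f"
    mount "$dev" "$mnt"

    # Minimal I/O smoke test over the NVMe/TCP-backed namespace.
    touch "$mnt/aaa"
    sync
    rm "$mnt/aaa"
    sync

    umount "$mnt"
    kill -0 "$nvmfpid"         # the target must still be running afterwards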
01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 827892 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.714 00:11:56.714 real 0m6.171s 00:11:56.714 user 0m0.024s 00:11:56.714 sys 0m0.057s 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:56.714 ************************************ 00:11:56.714 END TEST filesystem_ext4 00:11:56.714 ************************************ 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.714 ************************************ 00:11:56.714 START TEST filesystem_btrfs 00:11:56.714 ************************************ 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:56.714 01:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:56.714 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:57.278 btrfs-progs v6.8.1 00:11:57.278 See https://btrfs.readthedocs.io for more information. 00:11:57.278 00:11:57.278 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:57.278 NOTE: several default settings have changed in version 5.15, please make sure 00:11:57.278 this does not affect your deployments: 00:11:57.278 - DUP for metadata (-m dup) 00:11:57.278 - enabled no-holes (-O no-holes) 00:11:57.278 - enabled free-space-tree (-R free-space-tree) 00:11:57.278 00:11:57.278 Label: (null) 00:11:57.278 UUID: e6d8ad09-f90b-42eb-bb76-92fa298d5907 00:11:57.278 Node size: 16384 00:11:57.278 Sector size: 4096 (CPU page size: 4096) 00:11:57.278 Filesystem size: 510.00MiB 00:11:57.278 Block group profiles: 00:11:57.278 Data: single 8.00MiB 00:11:57.278 Metadata: DUP 32.00MiB 00:11:57.278 System: DUP 8.00MiB 00:11:57.278 SSD detected: yes 00:11:57.278 Zoned device: no 00:11:57.278 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:57.278 Checksum: crc32c 00:11:57.278 Number of devices: 1 00:11:57.278 Devices: 00:11:57.278 ID SIZE PATH 00:11:57.278 1 510.00MiB /dev/nvme0n1p1 00:11:57.278 00:11:57.278 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:57.278 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 827892 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.210 
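For the btrfs pass the same make_filesystem helper takes mkfs's generic -f force flag instead of ext4's -F (the '[ btrfs = ext4 ]' test above is false), and mkfs.btrfs prints the profile summary captured in the log. A condensed, assumed reconstruction of that flag selection; the real helper in autotest_common.sh also keeps a retry counter, visible as the i/force locals in the trace:

  make_filesystem() {                 # simplified sketch, not the actual helper
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F                    # mke2fs forces with -F
      else
          force=-f                    # mkfs.btrfs and mkfs.xfs force with -f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }
  make_filesystem btrfs /dev/nvme0n1p1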
01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.210 00:11:58.210 real 0m1.398s 00:11:58.210 user 0m0.019s 00:11:58.210 sys 0m0.101s 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.210 ************************************ 00:11:58.210 END TEST filesystem_btrfs 00:11:58.210 ************************************ 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.210 ************************************ 00:11:58.210 START TEST filesystem_xfs 00:11:58.210 ************************************ 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.210 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.211 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:58.211 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:58.211 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:58.211 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.211 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.211 = sectsz=512 attr=2, projid32bit=1 00:11:58.211 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.211 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.211 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:58.211 = sunit=0 swidth=0 blks 00:11:58.211 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.211 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.211 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.211 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.142 Discarding blocks...Done. 00:11:59.142 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:59.142 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 827892 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.106 00:12:01.106 real 0m2.895s 00:12:01.106 user 0m0.020s 00:12:01.106 sys 0m0.056s 00:12:01.106 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.107 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.107 ************************************ 00:12:01.107 END TEST filesystem_xfs 00:12:01.107 ************************************ 00:12:01.107 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.364 01:29:41 
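With the xfs case also passing, the no_in_capsule run tears down: just above, the SPDK_TEST partition is dropped with parted under flock and the host disconnects from cnode1 with nvme-cli; on the next lines the subsystem is deleted over RPC and the nvmf_tgt process (pid 827892) is stopped. A sketch of that sequence using the names from the trace, with rpc.py standing in for the log's rpc_cmd wrapper:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1                      # remove partition 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                       # host side
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # target side
  kill 827892                                                         # nvmf_tgt pid from the log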
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.364 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 827892 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 827892 ']' 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 827892 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 827892 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 827892' 00:12:01.365 killing process with pid 827892 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 827892 00:12:01.365 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 827892 00:12:01.955 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:01.955 00:12:01.955 real 0m16.283s 00:12:01.955 user 1m2.913s 00:12:01.955 sys 0m2.041s 00:12:01.955 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.956 ************************************ 00:12:01.956 END TEST nvmf_filesystem_no_in_capsule 00:12:01.956 ************************************ 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.956 ************************************ 00:12:01.956 START TEST nvmf_filesystem_in_capsule 00:12:01.956 ************************************ 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=829989 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 829989 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 829989 ']' 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
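The suite then repeats everything as nvmf_filesystem_in_capsule, i.e. nvmf_filesystem_part 4096, so the TCP transport will be created with a 4096-byte in-capsule data size. As the trace shows, nvmf_tgt (pid 829989) is started inside the cvl_0_0_ns_spdk network namespace. A hedged equivalent of that launch and the wait-for-RPC step, with paths shortened to be relative to the SPDK checkout and a simple spdk_get_version poll standing in for waitforlisten:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the target answers RPC on /var/tmp/spdk.sock
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done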
00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.956 01:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.956 [2024-10-01 01:29:41.771257] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:01.956 [2024-10-01 01:29:41.771360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.214 [2024-10-01 01:29:41.841850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.214 [2024-10-01 01:29:41.932580] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.214 [2024-10-01 01:29:41.932638] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.214 [2024-10-01 01:29:41.932652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.214 [2024-10-01 01:29:41.932663] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.214 [2024-10-01 01:29:41.932673] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.214 [2024-10-01 01:29:41.932721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.214 [2024-10-01 01:29:41.932746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.214 [2024-10-01 01:29:41.932805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.214 [2024-10-01 01:29:41.932807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.214 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.214 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:02.214 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:02.214 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.214 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.471 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.471 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:02.471 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 [2024-10-01 01:29:42.075894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.472 01:29:42 
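The 4096 surfaces here as -c 4096 on nvmf_create_transport, which sets the maximum in-capsule data size for the TCP transport; that parameter is what this in_capsule variant exercises. The RPC sequence that follows in the trace (malloc bdev, subsystem, namespace, listener) maps to these rpc.py calls, assuming the standard script path:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420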
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 [2024-10-01 01:29:42.245055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:02.472 01:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:02.472 { 00:12:02.472 "name": "Malloc1", 00:12:02.472 "aliases": [ 00:12:02.472 "8227d3f2-f500-4bf3-a054-8d89fe8c62b4" 00:12:02.472 ], 00:12:02.472 "product_name": "Malloc disk", 00:12:02.472 "block_size": 512, 00:12:02.472 "num_blocks": 1048576, 00:12:02.472 "uuid": "8227d3f2-f500-4bf3-a054-8d89fe8c62b4", 00:12:02.472 "assigned_rate_limits": { 00:12:02.472 "rw_ios_per_sec": 0, 00:12:02.472 "rw_mbytes_per_sec": 0, 00:12:02.472 "r_mbytes_per_sec": 0, 00:12:02.472 "w_mbytes_per_sec": 0 00:12:02.472 }, 00:12:02.472 "claimed": true, 00:12:02.472 "claim_type": "exclusive_write", 00:12:02.472 "zoned": false, 00:12:02.472 "supported_io_types": { 00:12:02.472 "read": true, 00:12:02.472 "write": true, 00:12:02.472 "unmap": true, 00:12:02.472 "flush": true, 00:12:02.472 "reset": true, 00:12:02.472 "nvme_admin": false, 00:12:02.472 "nvme_io": false, 00:12:02.472 "nvme_io_md": false, 00:12:02.472 "write_zeroes": true, 00:12:02.472 "zcopy": true, 00:12:02.472 "get_zone_info": false, 00:12:02.472 "zone_management": false, 00:12:02.472 "zone_append": false, 00:12:02.472 "compare": false, 00:12:02.472 "compare_and_write": false, 00:12:02.472 "abort": true, 00:12:02.472 "seek_hole": false, 00:12:02.472 "seek_data": false, 00:12:02.472 "copy": true, 00:12:02.472 "nvme_iov_md": false 00:12:02.472 }, 00:12:02.472 "memory_domains": [ 00:12:02.472 { 00:12:02.472 "dma_device_id": "system", 00:12:02.472 "dma_device_type": 1 00:12:02.472 }, 00:12:02.472 { 00:12:02.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.472 "dma_device_type": 2 00:12:02.472 } 00:12:02.472 ], 00:12:02.472 "driver_specific": {} 00:12:02.472 } 00:12:02.472 ]' 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:02.472 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:02.729 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:02.729 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:02.729 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:02.729 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:02.729 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.294 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.294 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.294 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.294 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:03.294 01:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:05.189 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:05.446 01:29:45 
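On the host side the test connects to the new subsystem, waits for a block device whose serial is SPDKISFASTANDAWESOME, checks that its size matches the 512 MiB malloc bdev, and lays a single GPT partition across it. Condensed from the trace (hostnqn and hostid as logged; the real waitforserial counts devices rather than a simple grep):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s /dev/"$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe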
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:05.703 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.075 ************************************ 00:12:07.075 START TEST filesystem_in_capsule_ext4 00:12:07.075 ************************************ 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:07.075 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.075 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.075 Discarding device blocks: 0/522240 done 00:12:07.075 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.075 Filesystem UUID: 7f8a0aa3-1f9a-491d-80df-542a99e60e2f 00:12:07.075 Superblock backups stored on blocks: 00:12:07.075 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.075 00:12:07.075 Allocating group tables: 0/64 done 00:12:07.075 Writing inode tables: 
0/64 done 00:12:07.639 Creating journal (8192 blocks): done 00:12:07.639 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.639 00:12:07.639 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:07.639 01:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 829989 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.190 00:12:14.190 real 0m6.854s 00:12:14.190 user 0m0.018s 00:12:14.190 sys 0m0.062s 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:14.190 ************************************ 00:12:14.190 END TEST filesystem_in_capsule_ext4 00:12:14.190 ************************************ 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.190 
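Each per-filesystem case, including the btrfs one that starts next, is driven by the run_test wrapper, which prints the START/END banners and the real/user/sys summaries seen throughout this log. Its rough shape, reconstructed only from those banners (the actual helper in autotest_common.sh also manages xtrace and argument checks):

  run_test() {                        # assumed, heavily simplified
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"
      echo "END TEST $test_name"
  }
  run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1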
************************************ 00:12:14.190 START TEST filesystem_in_capsule_btrfs 00:12:14.190 ************************************ 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:14.190 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:14.191 btrfs-progs v6.8.1 00:12:14.191 See https://btrfs.readthedocs.io for more information. 00:12:14.191 00:12:14.191 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:14.191 NOTE: several default settings have changed in version 5.15, please make sure 00:12:14.191 this does not affect your deployments: 00:12:14.191 - DUP for metadata (-m dup) 00:12:14.191 - enabled no-holes (-O no-holes) 00:12:14.191 - enabled free-space-tree (-R free-space-tree) 00:12:14.191 00:12:14.191 Label: (null) 00:12:14.191 UUID: 88def192-0803-48f6-bd8a-c5f7b585c940 00:12:14.191 Node size: 16384 00:12:14.191 Sector size: 4096 (CPU page size: 4096) 00:12:14.191 Filesystem size: 510.00MiB 00:12:14.191 Block group profiles: 00:12:14.191 Data: single 8.00MiB 00:12:14.191 Metadata: DUP 32.00MiB 00:12:14.191 System: DUP 8.00MiB 00:12:14.191 SSD detected: yes 00:12:14.191 Zoned device: no 00:12:14.191 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:14.191 Checksum: crc32c 00:12:14.191 Number of devices: 1 00:12:14.191 Devices: 00:12:14.191 ID SIZE PATH 00:12:14.191 1 510.00MiB /dev/nvme0n1p1 00:12:14.191 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 829989 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.191 00:12:14.191 real 0m0.467s 00:12:14.191 user 0m0.023s 00:12:14.191 sys 0m0.097s 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:14.191 ************************************ 00:12:14.191 END TEST filesystem_in_capsule_btrfs 00:12:14.191 ************************************ 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.191 01:29:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.191 ************************************ 00:12:14.191 START TEST filesystem_in_capsule_xfs 00:12:14.191 ************************************ 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:14.191 01:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:14.449 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:14.449 = sectsz=512 attr=2, projid32bit=1 00:12:14.449 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:14.449 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:14.449 data = bsize=4096 blocks=130560, imaxpct=25 00:12:14.449 = sunit=0 swidth=0 blks 00:12:14.449 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:14.449 log =internal log bsize=4096 blocks=16384, version=2 00:12:14.449 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:14.449 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:15.820 Discarding blocks...Done. 
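The geometry printed by mkfs.xfs is consistent with the earlier runs: the data section is 130560 blocks of 4096 bytes, i.e. 130560 * 4096 = 534,773,760 bytes = 510.00 MiB, the same usable size mkfs.btrfs reported for /dev/nvme0n1p1. That is the 512 MiB Malloc1 namespace less partition alignment and GPT overhead.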
00:12:15.820 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:15.820 01:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 829989 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.715 00:12:17.715 real 0m3.492s 00:12:17.715 user 0m0.021s 00:12:17.715 sys 0m0.055s 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.715 ************************************ 00:12:17.715 END TEST filesystem_in_capsule_xfs 00:12:17.715 ************************************ 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.715 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 829989 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 829989 ']' 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 829989 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 829989 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 829989' 00:12:17.972 killing process with pid 829989 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 829989 00:12:17.972 01:29:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 829989 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:18.535 00:12:18.535 real 0m16.410s 00:12:18.535 user 1m3.412s 00:12:18.535 sys 0m2.041s 00:12:18.535 01:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.535 ************************************ 00:12:18.535 END TEST nvmf_filesystem_in_capsule 00:12:18.535 ************************************ 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.535 rmmod nvme_tcp 00:12:18.535 rmmod nvme_fabrics 00:12:18.535 rmmod nvme_keyring 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.535 01:29:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.443 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.443 00:12:20.443 real 0m37.547s 00:12:20.443 user 2m7.379s 00:12:20.443 sys 0m5.845s 00:12:20.443 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.443 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.443 
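After the in-capsule suite passes, nvmftestfini unloads the host NVMe/TCP modules (the rmmod lines above), restores iptables without the SPDK_NVMF rules, removes the SPDK network namespace and flushes the remaining test interface address. A hedged sketch of that cleanup; the netns removal command is an assumption, since the log only shows the _remove_spdk_ns wrapper:

  modprobe -v -r nvme-tcp             # the dependent nvme_fabrics/nvme_keyring rmmods follow
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1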
************************************ 00:12:20.443 END TEST nvmf_filesystem 00:12:20.443 ************************************ 00:12:20.444 01:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:20.444 01:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.444 01:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.444 01:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.703 ************************************ 00:12:20.703 START TEST nvmf_target_discovery 00:12:20.703 ************************************ 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:20.703 * Looking for test storage... 00:12:20.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:20.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.703 --rc genhtml_branch_coverage=1 00:12:20.703 --rc genhtml_function_coverage=1 00:12:20.703 --rc genhtml_legend=1 00:12:20.703 --rc geninfo_all_blocks=1 00:12:20.703 --rc geninfo_unexecuted_blocks=1 00:12:20.703 00:12:20.703 ' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:20.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.703 --rc genhtml_branch_coverage=1 00:12:20.703 --rc genhtml_function_coverage=1 00:12:20.703 --rc genhtml_legend=1 00:12:20.703 --rc geninfo_all_blocks=1 00:12:20.703 --rc geninfo_unexecuted_blocks=1 00:12:20.703 00:12:20.703 ' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:20.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.703 --rc genhtml_branch_coverage=1 00:12:20.703 --rc genhtml_function_coverage=1 00:12:20.703 --rc genhtml_legend=1 00:12:20.703 --rc geninfo_all_blocks=1 00:12:20.703 --rc geninfo_unexecuted_blocks=1 00:12:20.703 00:12:20.703 ' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:20.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.703 --rc genhtml_branch_coverage=1 00:12:20.703 --rc genhtml_function_coverage=1 00:12:20.703 --rc genhtml_legend=1 00:12:20.703 --rc geninfo_all_blocks=1 00:12:20.703 --rc geninfo_unexecuted_blocks=1 00:12:20.703 00:12:20.703 ' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.703 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.704 01:30:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.234 01:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:23.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:23.234 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:23.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:23.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:23.235 01:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:23.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.235 01:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:23.235 00:12:23.235 --- 10.0.0.2 ping statistics --- 00:12:23.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.235 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:23.235 00:12:23.235 --- 10.0.0.1 ping statistics --- 00:12:23.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.235 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=834145 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 834145 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 834145 ']' 00:12:23.235 
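The nvmf_tcp_init sequence traced above splits the two E810 ports found earlier into a target side and an initiator side on the same host: cvl_0_0 is moved into a private network namespace for the SPDK target, cvl_0_1 stays in the root namespace for the kernel initiator, and an iptables ACCEPT rule for TCP port 4420 (tagged SPDK_NVMF so the teardown can strip it) is inserted on the initiator interface. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # harness also tags the rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                # target reachable from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # initiator reachable from the target namespace
    modprobe nvme-tcp                                 # kernel initiator transport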
01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.235 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.235 [2024-10-01 01:30:02.687348] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:23.235 [2024-10-01 01:30:02.687443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.235 [2024-10-01 01:30:02.760562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.235 [2024-10-01 01:30:02.858211] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.236 [2024-10-01 01:30:02.858267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.236 [2024-10-01 01:30:02.858294] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.236 [2024-10-01 01:30:02.858308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.236 [2024-10-01 01:30:02.858319] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.236 [2024-10-01 01:30:02.858382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.236 [2024-10-01 01:30:02.858415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.236 [2024-10-01 01:30:02.858475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.236 [2024-10-01 01:30:02.858477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.236 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:23.236 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:23.236 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:23.236 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:23.236 01:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 [2024-10-01 01:30:03.017010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 Null1 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 01:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 [2024-10-01 01:30:03.057393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 Null2 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.236 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:23.495 Null3 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.495 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 Null4 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:23.496 00:12:23.496 Discovery Log Number of Records 6, Generation counter 6 00:12:23.496 =====Discovery Log Entry 0====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: current discovery subsystem 00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4420 00:12:23.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: explicit discovery connections, duplicate discovery information 00:12:23.496 sectype: none 00:12:23.496 =====Discovery Log Entry 1====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: nvme subsystem 00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4420 00:12:23.496 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: none 00:12:23.496 sectype: none 00:12:23.496 =====Discovery Log Entry 2====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: nvme subsystem 00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4420 00:12:23.496 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: none 00:12:23.496 sectype: none 00:12:23.496 =====Discovery Log Entry 3====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: nvme subsystem 00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4420 00:12:23.496 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: none 00:12:23.496 sectype: none 00:12:23.496 =====Discovery Log Entry 4====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: nvme subsystem 
00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4420 00:12:23.496 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: none 00:12:23.496 sectype: none 00:12:23.496 =====Discovery Log Entry 5====== 00:12:23.496 trtype: tcp 00:12:23.496 adrfam: ipv4 00:12:23.496 subtype: discovery subsystem referral 00:12:23.496 treq: not required 00:12:23.496 portid: 0 00:12:23.496 trsvcid: 4430 00:12:23.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:23.496 traddr: 10.0.0.2 00:12:23.496 eflags: none 00:12:23.496 sectype: none 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:23.496 Perform nvmf subsystem discovery via RPC 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.496 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 [ 00:12:23.496 { 00:12:23.496 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.496 "subtype": "Discovery", 00:12:23.496 "listen_addresses": [ 00:12:23.496 { 00:12:23.496 "trtype": "TCP", 00:12:23.496 "adrfam": "IPv4", 00:12:23.496 "traddr": "10.0.0.2", 00:12:23.496 "trsvcid": "4420" 00:12:23.496 } 00:12:23.496 ], 00:12:23.496 "allow_any_host": true, 00:12:23.496 "hosts": [] 00:12:23.496 }, 00:12:23.496 { 00:12:23.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.496 "subtype": "NVMe", 00:12:23.496 "listen_addresses": [ 00:12:23.496 { 00:12:23.496 "trtype": "TCP", 00:12:23.496 "adrfam": "IPv4", 00:12:23.496 "traddr": "10.0.0.2", 00:12:23.496 "trsvcid": "4420" 00:12:23.496 } 00:12:23.496 ], 00:12:23.496 "allow_any_host": true, 00:12:23.496 "hosts": [], 00:12:23.496 "serial_number": "SPDK00000000000001", 00:12:23.496 "model_number": "SPDK bdev Controller", 00:12:23.496 "max_namespaces": 32, 00:12:23.496 "min_cntlid": 1, 00:12:23.496 "max_cntlid": 65519, 00:12:23.496 "namespaces": [ 00:12:23.496 { 00:12:23.496 "nsid": 1, 00:12:23.496 "bdev_name": "Null1", 00:12:23.496 "name": "Null1", 00:12:23.496 "nguid": "A97FCB65AE08425F948E3D285E32F1D2", 00:12:23.496 "uuid": "a97fcb65-ae08-425f-948e-3d285e32f1d2" 00:12:23.496 } 00:12:23.496 ] 00:12:23.496 }, 00:12:23.496 { 00:12:23.496 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:23.496 "subtype": "NVMe", 00:12:23.496 "listen_addresses": [ 00:12:23.496 { 00:12:23.496 "trtype": "TCP", 00:12:23.496 "adrfam": "IPv4", 00:12:23.496 "traddr": "10.0.0.2", 00:12:23.496 "trsvcid": "4420" 00:12:23.496 } 00:12:23.496 ], 00:12:23.496 "allow_any_host": true, 00:12:23.496 "hosts": [], 00:12:23.496 "serial_number": "SPDK00000000000002", 00:12:23.496 "model_number": "SPDK bdev Controller", 00:12:23.496 "max_namespaces": 32, 00:12:23.496 "min_cntlid": 1, 00:12:23.496 "max_cntlid": 65519, 00:12:23.496 "namespaces": [ 00:12:23.496 { 00:12:23.496 "nsid": 1, 00:12:23.496 "bdev_name": "Null2", 00:12:23.496 "name": "Null2", 00:12:23.496 "nguid": "C3E40FC8FCA24707B81C604107639033", 00:12:23.496 "uuid": "c3e40fc8-fca2-4707-b81c-604107639033" 00:12:23.496 } 00:12:23.496 ] 00:12:23.496 }, 00:12:23.496 { 00:12:23.496 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:23.496 "subtype": "NVMe", 00:12:23.496 "listen_addresses": [ 00:12:23.496 { 00:12:23.496 "trtype": "TCP", 00:12:23.496 "adrfam": "IPv4", 00:12:23.496 "traddr": "10.0.0.2", 
00:12:23.496 "trsvcid": "4420" 00:12:23.496 } 00:12:23.496 ], 00:12:23.497 "allow_any_host": true, 00:12:23.497 "hosts": [], 00:12:23.497 "serial_number": "SPDK00000000000003", 00:12:23.497 "model_number": "SPDK bdev Controller", 00:12:23.497 "max_namespaces": 32, 00:12:23.497 "min_cntlid": 1, 00:12:23.497 "max_cntlid": 65519, 00:12:23.497 "namespaces": [ 00:12:23.497 { 00:12:23.497 "nsid": 1, 00:12:23.497 "bdev_name": "Null3", 00:12:23.497 "name": "Null3", 00:12:23.497 "nguid": "D809B86385B54935B6F1E9F85C81F274", 00:12:23.497 "uuid": "d809b863-85b5-4935-b6f1-e9f85c81f274" 00:12:23.497 } 00:12:23.497 ] 00:12:23.497 }, 00:12:23.497 { 00:12:23.497 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:23.497 "subtype": "NVMe", 00:12:23.497 "listen_addresses": [ 00:12:23.497 { 00:12:23.497 "trtype": "TCP", 00:12:23.497 "adrfam": "IPv4", 00:12:23.497 "traddr": "10.0.0.2", 00:12:23.497 "trsvcid": "4420" 00:12:23.497 } 00:12:23.497 ], 00:12:23.497 "allow_any_host": true, 00:12:23.497 "hosts": [], 00:12:23.497 "serial_number": "SPDK00000000000004", 00:12:23.497 "model_number": "SPDK bdev Controller", 00:12:23.497 "max_namespaces": 32, 00:12:23.497 "min_cntlid": 1, 00:12:23.497 "max_cntlid": 65519, 00:12:23.497 "namespaces": [ 00:12:23.497 { 00:12:23.497 "nsid": 1, 00:12:23.497 "bdev_name": "Null4", 00:12:23.497 "name": "Null4", 00:12:23.497 "nguid": "8CEE09A4566243E3A626C4D8254AD436", 00:12:23.497 "uuid": "8cee09a4-5662-43e3-a626-c4d8254ad436" 00:12:23.497 } 00:12:23.497 ] 00:12:23.497 } 00:12:23.497 ] 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.497 01:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.497 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.756 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.756 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.756 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:23.756 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:23.757 01:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.757 rmmod nvme_tcp 00:12:23.757 rmmod nvme_fabrics 00:12:23.757 rmmod nvme_keyring 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 834145 ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 834145 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 834145 ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 834145 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834145 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834145' 00:12:23.757 killing process with pid 834145 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 834145 00:12:23.757 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 834145 00:12:24.016 01:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:24.016 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:24.016 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:24.016 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:24.016 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:24.016 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.017 01:30:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.553 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.553 00:12:26.553 real 0m5.511s 00:12:26.553 user 0m4.496s 00:12:26.553 sys 0m1.861s 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:26.554 ************************************ 00:12:26.554 END TEST nvmf_target_discovery 00:12:26.554 ************************************ 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.554 ************************************ 00:12:26.554 START TEST nvmf_referrals 00:12:26.554 ************************************ 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:26.554 * Looking for test storage... 
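Before handing off to the next test, the nvmf_target_discovery run above tears its fixtures back down: for each of cnode1 through cnode4 it deletes the subsystem and the matching Null bdev, drops the 10.0.0.2:4430 referral, and checks that bdev_get_bdevs reports nothing left before nvmftestfini unloads the nvme-tcp modules and kills the target (pid 834145). A minimal sketch of that teardown, assuming SPDK's scripts/rpc.py is called directly in place of the test's rpc_cmd wrapper:

    for i in $(seq 1 4); do
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem first
        rpc.py bdev_null_delete "Null$i"                             # then its backing null bdev
    done
    rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    rpc.py bdev_get_bdevs | jq -r '.[].name'                         # expected to print nothing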
00:12:26.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:26.554 01:30:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:26.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.554 --rc genhtml_branch_coverage=1 00:12:26.554 --rc genhtml_function_coverage=1 00:12:26.554 --rc genhtml_legend=1 00:12:26.554 --rc geninfo_all_blocks=1 00:12:26.554 --rc geninfo_unexecuted_blocks=1 00:12:26.554 00:12:26.554 ' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:26.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.554 --rc genhtml_branch_coverage=1 00:12:26.554 --rc genhtml_function_coverage=1 00:12:26.554 --rc genhtml_legend=1 00:12:26.554 --rc geninfo_all_blocks=1 00:12:26.554 --rc geninfo_unexecuted_blocks=1 00:12:26.554 00:12:26.554 ' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:26.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.554 --rc genhtml_branch_coverage=1 00:12:26.554 --rc genhtml_function_coverage=1 00:12:26.554 --rc genhtml_legend=1 00:12:26.554 --rc geninfo_all_blocks=1 00:12:26.554 --rc geninfo_unexecuted_blocks=1 00:12:26.554 00:12:26.554 ' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:26.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.554 --rc genhtml_branch_coverage=1 00:12:26.554 --rc genhtml_function_coverage=1 00:12:26.554 --rc genhtml_legend=1 00:12:26.554 --rc geninfo_all_blocks=1 00:12:26.554 --rc geninfo_unexecuted_blocks=1 00:12:26.554 00:12:26.554 ' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.554 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.555 01:30:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:28.457 01:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:28.457 01:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:28.457 01:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.457 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:12:28.458 00:12:28.458 --- 10.0.0.2 ping statistics --- 00:12:28.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.458 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:12:28.458 00:12:28.458 --- 10.0.0.1 ping statistics --- 00:12:28.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.458 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=836595 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 836595 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 836595 ']' 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
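The nvmftestinit sequence above builds the test network out of the two e810 ports it found: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the link before nvmf_tgt is started inside the namespace. Condensed from the trace into a standalone sketch (interface names and addresses as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # the harness also tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1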
00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.458 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.458 [2024-10-01 01:30:08.276871] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:28.458 [2024-10-01 01:30:08.276963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.717 [2024-10-01 01:30:08.354438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.717 [2024-10-01 01:30:08.450713] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.717 [2024-10-01 01:30:08.450768] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.717 [2024-10-01 01:30:08.450793] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.717 [2024-10-01 01:30:08.450807] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.717 [2024-10-01 01:30:08.450820] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.717 [2024-10-01 01:30:08.450889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.717 [2024-10-01 01:30:08.450970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.717 [2024-10-01 01:30:08.451069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.717 [2024-10-01 01:30:08.451074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 [2024-10-01 01:30:08.608038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
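With the link verified, the referrals test starts the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), waits for /var/tmp/spdk.sock, and then wires up discovery over TCP: a TCP transport is created and a discovery listener is added on 10.0.0.2 port 8009, which the target acknowledges with the "Listening on 10.0.0.2 port 8009" notice just below. The two RPCs, sketched with rpc.py standing in for the test's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                   # same transport options the harness passes
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery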
00:12:28.976 [2024-10-01 01:30:08.620302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.976 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.977 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.235 01:30:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:29.235 01:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.235 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.493 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.753 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.011 01:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.011 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.271 01:30:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.271 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.529 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
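[Editor's note] This is the tail end of referrals.sh: each referral registered earlier is read back twice — once over the RPC socket (nvmf_discovery_get_referrals) and once from the host's view of the discovery log page (nvme discover against port 8009) — and then removed again until both views come back empty; the nvmftestfini teardown continues directly below. A minimal stand-alone sketch of that round-trip, assuming the stock scripts/rpc.py client from an SPDK checkout and that nvmf_discovery_add_referral accepts the same -t/-a/-s/-n flags as the remove call logged above (--hostnqn/--hostid are omitted from nvme discover here for brevity):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=./spdk/scripts/rpc.py   # hypothetical path; point at rpc.py in your SPDK checkout

    # Register a referral to a second endpoint on the discovery subsystem.
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # Target-side view: list referrals over the RPC socket.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: query the discovery service on port 8009 and keep every
    # record except the one describing the discovery subsystem we queried.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove it again; both listings above should now come back empty.
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

Both listings are piped through sort before comparison, which is exactly what the sort calls in the log output above are doing: the check is order-independent.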
00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.788 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.788 rmmod nvme_tcp 00:12:31.048 rmmod nvme_fabrics 00:12:31.048 rmmod nvme_keyring 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 836595 ']' 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 836595 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 836595 ']' 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 836595 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 836595 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 836595' 00:12:31.048 killing process with pid 836595 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 836595 00:12:31.048 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 836595 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.309 01:30:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.215 00:12:33.215 real 0m7.134s 00:12:33.215 user 0m11.487s 00:12:33.215 sys 0m2.250s 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.215 ************************************ 00:12:33.215 END TEST nvmf_referrals 00:12:33.215 ************************************ 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.215 ************************************ 00:12:33.215 START TEST nvmf_connect_disconnect 00:12:33.215 ************************************ 00:12:33.215 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.473 * Looking for test storage... 00:12:33.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:33.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.474 --rc genhtml_branch_coverage=1 00:12:33.474 --rc genhtml_function_coverage=1 00:12:33.474 --rc genhtml_legend=1 00:12:33.474 --rc geninfo_all_blocks=1 00:12:33.474 --rc geninfo_unexecuted_blocks=1 00:12:33.474 00:12:33.474 ' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:33.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.474 --rc genhtml_branch_coverage=1 00:12:33.474 --rc genhtml_function_coverage=1 00:12:33.474 --rc genhtml_legend=1 00:12:33.474 --rc geninfo_all_blocks=1 00:12:33.474 --rc geninfo_unexecuted_blocks=1 00:12:33.474 00:12:33.474 ' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:33.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.474 --rc genhtml_branch_coverage=1 00:12:33.474 --rc genhtml_function_coverage=1 00:12:33.474 --rc genhtml_legend=1 00:12:33.474 --rc geninfo_all_blocks=1 00:12:33.474 --rc geninfo_unexecuted_blocks=1 00:12:33.474 00:12:33.474 ' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:33.474 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.474 --rc genhtml_branch_coverage=1 00:12:33.474 --rc genhtml_function_coverage=1 00:12:33.474 --rc genhtml_legend=1 00:12:33.474 --rc geninfo_all_blocks=1 00:12:33.474 --rc geninfo_unexecuted_blocks=1 00:12:33.474 00:12:33.474 ' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:33.474 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.475 01:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.475 01:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.009 
01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:36.009 01:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:36.009 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:36.010 01:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
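[Editor's note] nvmf_tcp_init above turns the two detected ice ports into a back-to-back test rig: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the remaining link-up, firewall and ping checks follow immediately below. Condensed into plain iproute2/iptables (interface names are the ones detected on this host and will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side interface into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

Splitting the two ports across network namespaces keeps them in separate network stacks, so the NVMe/TCP traffic in this phy job actually crosses the link between the two ports instead of being answered locally.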
00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:12:36.010 00:12:36.010 --- 10.0.0.2 ping statistics --- 00:12:36.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.010 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:12:36.010 00:12:36.010 --- 10.0.0.1 ping statistics --- 00:12:36.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.010 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=839165 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 839165 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 839165 ']' 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.010 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.010 [2024-10-01 01:30:15.608228] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:36.010 [2024-10-01 01:30:15.608313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.010 [2024-10-01 01:30:15.680079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.010 [2024-10-01 01:30:15.775725] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.010 [2024-10-01 01:30:15.775790] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.010 [2024-10-01 01:30:15.775817] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.010 [2024-10-01 01:30:15.775831] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.010 [2024-10-01 01:30:15.775844] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
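[Editor's note] With nvmf_tgt now running inside the namespace (started above via ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), the rpc_cmd calls just below assemble the smallest useful NVMe/TCP target — one TCP transport, a 64 MiB malloc bdev, one subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420 — and connect_disconnect.sh then drives 100 nvme connect / nvme disconnect cycles against it (NVME_CONNECT='nvme connect -i 8', num_iterations=100). A simplified, hand-driven sketch of that sequence, assuming an SPDK checkout at ./spdk and nvme-cli on the initiator side; the loop body is illustrative, not the script's actual logic:

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=./spdk/scripts/rpc.py     # rpc.py talks over a local Unix socket, so no netns prefix is needed

    # Target plumbing, flags copied from the rpc_cmd calls in this run.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                     # 64 MiB / 512 B blocks -> bdev "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Drive the connect/disconnect loop from the initiator (root namespace).
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        sleep 1                                        # crude settle delay, purely illustrative
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # source of the "disconnected 1 controller(s)" lines below
    done

The repeated "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines that follow are the per-iteration nvme disconnect output from those 100 cycles.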
00:12:36.011 [2024-10-01 01:30:15.775932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.011 [2024-10-01 01:30:15.775990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.011 [2024-10-01 01:30:15.776046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.011 [2024-10-01 01:30:15.776050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 [2024-10-01 01:30:15.946477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 01:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.271 01:30:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 [2024-10-01 01:30:16.003850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.271 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.271 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:36.271 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:36.271 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:36.271 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:38.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.653 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:42.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.166 rmmod nvme_tcp 00:16:29.166 rmmod nvme_fabrics 00:16:29.166 rmmod nvme_keyring 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 839165 ']' 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 839165 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 839165 ']' 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 839165 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:29.166 
01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 839165 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 839165' 00:16:29.166 killing process with pid 839165 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 839165 00:16:29.166 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 839165 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:29.425 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:29.426 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.426 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.426 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.426 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.426 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:31.331 00:16:31.331 real 3m58.077s 00:16:31.331 user 15m6.934s 00:16:31.331 sys 0m35.126s 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.331 ************************************ 00:16:31.331 END TEST nvmf_connect_disconnect 00:16:31.331 ************************************ 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.331 01:34:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.591 ************************************ 00:16:31.591 START TEST nvmf_multitarget 00:16:31.591 ************************************ 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:31.591 * Looking for test storage... 00:16:31.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.591 --rc genhtml_branch_coverage=1 00:16:31.591 --rc genhtml_function_coverage=1 00:16:31.591 --rc genhtml_legend=1 00:16:31.591 --rc geninfo_all_blocks=1 00:16:31.591 --rc geninfo_unexecuted_blocks=1 00:16:31.591 00:16:31.591 ' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.591 --rc genhtml_branch_coverage=1 00:16:31.591 --rc genhtml_function_coverage=1 00:16:31.591 --rc genhtml_legend=1 00:16:31.591 --rc geninfo_all_blocks=1 00:16:31.591 --rc geninfo_unexecuted_blocks=1 00:16:31.591 00:16:31.591 ' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.591 --rc genhtml_branch_coverage=1 00:16:31.591 --rc genhtml_function_coverage=1 00:16:31.591 --rc genhtml_legend=1 00:16:31.591 --rc geninfo_all_blocks=1 00:16:31.591 --rc geninfo_unexecuted_blocks=1 00:16:31.591 00:16:31.591 ' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.591 --rc genhtml_branch_coverage=1 00:16:31.591 --rc genhtml_function_coverage=1 00:16:31.591 --rc genhtml_legend=1 00:16:31.591 --rc geninfo_all_blocks=1 00:16:31.591 --rc geninfo_unexecuted_blocks=1 00:16:31.591 00:16:31.591 ' 00:16:31.591 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.591 01:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:31.592 01:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.592 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:34.124 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
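[editor's note] The device discovery traced above matches the two Intel E810 ports (0x8086 - 0x159b at 0000:0a:00.0 and 0000:0a:00.1, driver ice) and then resolves their kernel net device names through a sysfs glob, which is where the cvl_0_0 / cvl_0_1 names seen later come from. A minimal standalone sketch of that sysfs lookup follows; the PCI addresses are copied from this log and will differ on other hosts, and the loop is an illustration rather than the test script's exact code.

#!/usr/bin/env bash
# Sketch only: map the PCI functions found above to their kernel net device names.
# PCI addresses are taken from this log (assumption: same host layout).
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue            # skip ports that expose no net device
        echo "$pci -> $(basename "$dev")"    # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done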
00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:16:34.125 00:16:34.125 --- 10.0.0.2 ping statistics --- 00:16:34.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.125 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:16:34.125 00:16:34.125 --- 10.0.0.1 ping statistics --- 00:16:34.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.125 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=870381 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 870381 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 870381 ']' 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.125 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.125 [2024-10-01 01:34:13.665354] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
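[editor's note] The nvmf_tcp_init sequence traced above moves one port (cvl_0_0) into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), keeps cvl_0_1 in the host namespace as the initiator side (10.0.0.1), opens TCP port 4420 with an iptables rule tagged SPDK_NVMF, and verifies reachability with ping in both directions before nvmf_tgt is launched inside the namespace. Below is a minimal standalone sketch of the same topology; names, addresses and the port are copied from this log, and the only assumptions are root privileges, that both interfaces already exist, and a simplified iptables comment string.

#!/usr/bin/env bash
set -e
# Sketch of the namespace topology built by the trace above (names/addresses from this log).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, inside the ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic on the default port; the SPDK_NVMF comment lets cleanup filter the rule out later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                                     # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # namespace -> host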
00:16:34.125 [2024-10-01 01:34:13.665437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.125 [2024-10-01 01:34:13.730538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.125 [2024-10-01 01:34:13.821838] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.125 [2024-10-01 01:34:13.821910] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.125 [2024-10-01 01:34:13.821939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.125 [2024-10-01 01:34:13.821952] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.126 [2024-10-01 01:34:13.821962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.126 [2024-10-01 01:34:13.822094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.126 [2024-10-01 01:34:13.822120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.126 [2024-10-01 01:34:13.822182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.126 [2024-10-01 01:34:13.822184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.126 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.126 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:34.126 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:34.126 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.126 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.414 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.414 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:34.414 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.414 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:34.414 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:34.414 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:34.414 "nvmf_tgt_1" 00:16:34.414 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:34.697 "nvmf_tgt_2" 00:16:34.697 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:34.697 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:34.697 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:34.697 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:34.955 true 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:34.956 true 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.956 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.213 rmmod nvme_tcp 00:16:35.213 rmmod nvme_fabrics 00:16:35.213 rmmod nvme_keyring 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 870381 ']' 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 870381 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 870381 ']' 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 870381 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870381 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.213 01:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870381' 00:16:35.213 killing process with pid 870381 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 870381 00:16:35.213 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 870381 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.472 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.377 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.377 00:16:37.377 real 0m5.999s 00:16:37.377 user 0m6.868s 00:16:37.377 sys 0m2.049s 00:16:37.377 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:37.377 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.378 ************************************ 00:16:37.378 END TEST nvmf_multitarget 00:16:37.378 ************************************ 00:16:37.378 01:34:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.378 01:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:37.378 01:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:37.378 01:34:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.636 ************************************ 00:16:37.636 START TEST nvmf_rpc 00:16:37.636 ************************************ 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.636 * Looking for test storage... 
00:16:37.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.636 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.637 --rc genhtml_branch_coverage=1 00:16:37.637 --rc genhtml_function_coverage=1 00:16:37.637 --rc genhtml_legend=1 00:16:37.637 --rc geninfo_all_blocks=1 00:16:37.637 --rc geninfo_unexecuted_blocks=1 00:16:37.637 00:16:37.637 ' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.637 --rc genhtml_branch_coverage=1 00:16:37.637 --rc genhtml_function_coverage=1 00:16:37.637 --rc genhtml_legend=1 00:16:37.637 --rc geninfo_all_blocks=1 00:16:37.637 --rc geninfo_unexecuted_blocks=1 00:16:37.637 00:16:37.637 ' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.637 --rc genhtml_branch_coverage=1 00:16:37.637 --rc genhtml_function_coverage=1 00:16:37.637 --rc genhtml_legend=1 00:16:37.637 --rc geninfo_all_blocks=1 00:16:37.637 --rc geninfo_unexecuted_blocks=1 00:16:37.637 00:16:37.637 ' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.637 --rc genhtml_branch_coverage=1 00:16:37.637 --rc genhtml_function_coverage=1 00:16:37.637 --rc genhtml_legend=1 00:16:37.637 --rc geninfo_all_blocks=1 00:16:37.637 --rc geninfo_unexecuted_blocks=1 00:16:37.637 00:16:37.637 ' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
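[editor's note] The nvmf_multitarget run that finished above drives target creation and deletion over the SPDK RPC socket via test/nvmf/target/multitarget_rpc.py and asserts on the count reported by nvmf_get_targets: 1 before, 3 after creating nvmf_tgt_1 and nvmf_tgt_2, and 1 again after deleting both. A minimal sketch of that call sequence follows; it assumes the nvmf_tgt application is already running and listening on its default RPC socket, and the helper path and the -s 32 argument are copied verbatim from the trace rather than independently verified.

#!/usr/bin/env bash
set -e
# Sketch of the RPC sequence exercised by the multitarget test above (calls copied from the trace).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$RPC nvmf_get_targets | jq length          # expect 1: only the default target exists
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$RPC nvmf_get_targets | jq length          # expect 3 after the two creations
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
$RPC nvmf_get_targets | jq length          # back to 1 once both are deleted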
00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:37.637 01:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.637 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.638 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:37.638 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:37.638 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.638 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.536 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:39.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:39.537 
01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:39.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.537 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:39.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:16:39.795 00:16:39.795 --- 10.0.0.2 ping statistics --- 00:16:39.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.795 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:16:39.795 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:16:39.795 00:16:39.795 --- 10.0.0.1 ping statistics --- 00:16:39.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.795 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=872514 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 872514 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 872514 ']' 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.796 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.796 [2024-10-01 01:34:19.552844] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
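The trace above builds the test fabric without any external network: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, and two pings confirm the 10.0.0.1/10.0.0.2 path before the target application is started inside the namespace. A condensed sketch of the same steps, using the interface and namespace names taken from the trace (nvmf/common.sh performs these through its own helpers; the standalone commands below are an illustration, not the script itself):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP connections in
  ping -c 1 10.0.0.2                                                  # root namespace reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace reaches the initiator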
00:16:39.796 [2024-10-01 01:34:19.552931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.796 [2024-10-01 01:34:19.616599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.054 [2024-10-01 01:34:19.700527] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.054 [2024-10-01 01:34:19.700583] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.054 [2024-10-01 01:34:19.700612] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.054 [2024-10-01 01:34:19.700624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.054 [2024-10-01 01:34:19.700634] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.054 [2024-10-01 01:34:19.700685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.054 [2024-10-01 01:34:19.700744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.054 [2024-10-01 01:34:19.700813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.054 [2024-10-01 01:34:19.700815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:40.054 "tick_rate": 2700000000, 00:16:40.054 "poll_groups": [ 00:16:40.054 { 00:16:40.054 "name": "nvmf_tgt_poll_group_000", 00:16:40.054 "admin_qpairs": 0, 00:16:40.054 "io_qpairs": 0, 00:16:40.054 "current_admin_qpairs": 0, 00:16:40.054 "current_io_qpairs": 0, 00:16:40.054 "pending_bdev_io": 0, 00:16:40.054 "completed_nvme_io": 0, 00:16:40.054 "transports": [] 00:16:40.054 }, 00:16:40.054 { 00:16:40.054 "name": "nvmf_tgt_poll_group_001", 00:16:40.054 "admin_qpairs": 0, 00:16:40.054 "io_qpairs": 0, 00:16:40.054 "current_admin_qpairs": 0, 00:16:40.054 "current_io_qpairs": 0, 00:16:40.054 "pending_bdev_io": 0, 00:16:40.054 "completed_nvme_io": 0, 00:16:40.054 "transports": [] 00:16:40.054 }, 00:16:40.054 { 00:16:40.054 "name": "nvmf_tgt_poll_group_002", 00:16:40.054 "admin_qpairs": 0, 00:16:40.054 "io_qpairs": 0, 00:16:40.054 
"current_admin_qpairs": 0, 00:16:40.054 "current_io_qpairs": 0, 00:16:40.054 "pending_bdev_io": 0, 00:16:40.054 "completed_nvme_io": 0, 00:16:40.054 "transports": [] 00:16:40.054 }, 00:16:40.054 { 00:16:40.054 "name": "nvmf_tgt_poll_group_003", 00:16:40.054 "admin_qpairs": 0, 00:16:40.054 "io_qpairs": 0, 00:16:40.054 "current_admin_qpairs": 0, 00:16:40.054 "current_io_qpairs": 0, 00:16:40.054 "pending_bdev_io": 0, 00:16:40.054 "completed_nvme_io": 0, 00:16:40.054 "transports": [] 00:16:40.054 } 00:16:40.054 ] 00:16:40.054 }' 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:40.054 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.312 [2024-10-01 01:34:19.956665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:40.312 "tick_rate": 2700000000, 00:16:40.312 "poll_groups": [ 00:16:40.312 { 00:16:40.312 "name": "nvmf_tgt_poll_group_000", 00:16:40.312 "admin_qpairs": 0, 00:16:40.312 "io_qpairs": 0, 00:16:40.312 "current_admin_qpairs": 0, 00:16:40.312 "current_io_qpairs": 0, 00:16:40.312 "pending_bdev_io": 0, 00:16:40.312 "completed_nvme_io": 0, 00:16:40.312 "transports": [ 00:16:40.312 { 00:16:40.312 "trtype": "TCP" 00:16:40.312 } 00:16:40.312 ] 00:16:40.312 }, 00:16:40.312 { 00:16:40.312 "name": "nvmf_tgt_poll_group_001", 00:16:40.312 "admin_qpairs": 0, 00:16:40.312 "io_qpairs": 0, 00:16:40.312 "current_admin_qpairs": 0, 00:16:40.312 "current_io_qpairs": 0, 00:16:40.312 "pending_bdev_io": 0, 00:16:40.312 "completed_nvme_io": 0, 00:16:40.312 "transports": [ 00:16:40.312 { 00:16:40.312 "trtype": "TCP" 00:16:40.312 } 00:16:40.312 ] 00:16:40.312 }, 00:16:40.312 { 00:16:40.312 "name": "nvmf_tgt_poll_group_002", 00:16:40.312 "admin_qpairs": 0, 00:16:40.312 "io_qpairs": 0, 00:16:40.312 "current_admin_qpairs": 0, 00:16:40.312 "current_io_qpairs": 0, 00:16:40.312 "pending_bdev_io": 0, 00:16:40.312 "completed_nvme_io": 0, 00:16:40.312 "transports": [ 00:16:40.312 { 00:16:40.312 "trtype": "TCP" 
00:16:40.312 } 00:16:40.312 ] 00:16:40.312 }, 00:16:40.312 { 00:16:40.312 "name": "nvmf_tgt_poll_group_003", 00:16:40.312 "admin_qpairs": 0, 00:16:40.312 "io_qpairs": 0, 00:16:40.312 "current_admin_qpairs": 0, 00:16:40.312 "current_io_qpairs": 0, 00:16:40.312 "pending_bdev_io": 0, 00:16:40.312 "completed_nvme_io": 0, 00:16:40.312 "transports": [ 00:16:40.312 { 00:16:40.312 "trtype": "TCP" 00:16:40.312 } 00:16:40.312 ] 00:16:40.312 } 00:16:40.312 ] 00:16:40.312 }' 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:40.312 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:40.313 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:40.313 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.313 Malloc1 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.313 [2024-10-01 01:34:20.119131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:40.313 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:40.313 [2024-10-01 01:34:20.151788] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:40.570 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:40.570 could not add new controller: failed to write to nvme-fabrics device 00:16:40.570 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:40.570 01:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.571 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.135 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.135 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.135 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.135 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.135 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.032 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.032 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:43.032 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.290 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.290 [2024-10-01 01:34:23.045267] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:43.290 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:43.290 could not add new controller: failed to write to nvme-fabrics device 00:16:43.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.291 
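This stretch of the run exercises per-host access control on nqn.2016-06.io.spdk:cnode1: with allow_any_host disabled the nvme connect attempt is rejected ("does not allow host"), it succeeds once the host NQN is whitelisted, fails again after the host is removed, and the trace continues below by re-enabling allow_any_host and reconnecting. The rpc_cmd wrapper in the trace issues SPDK RPCs; a minimal sketch of the same sequence via scripts/rpc.py (the rpc.py path under the workspace's spdk checkout is an assumption, the host NQN is the one from the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location in this workspace
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  $RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1       # lock the subsystem down
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # rejected
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN       # whitelist this host NQN
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # accepted
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN    # connect is rejected again
  $RPC nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # open the subsystem back up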
01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.291 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.857 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.857 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.857 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.857 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:43.857 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.386 
01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 [2024-10-01 01:34:25.847418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.386 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.952 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.952 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:46.952 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.952 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:46.952 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.851 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.851 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.851 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.851 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:48.851 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.852 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 [2024-10-01 01:34:28.728530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.110 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.676 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.676 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:49.676 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.676 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:49.676 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:51.574 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
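Between connect and disconnect the harness does not assume the fabric settles instantly: waitforserial polls lsblk until a block device carrying the subsystem serial shows up, and waitforserial_disconnect polls until it is gone again. A rough equivalent of the polling visible in the trace (the sleep 2 and the i++ <= 15 retry cap mirror the counters in the xtrace; this is a sketch, not the harness function itself):

  waitforserial() {
      local serial=$1 i=0
      sleep 2                                                   # give the controller time to appear
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1                                                  # namespace never showed up
  }
  waitforserial SPDKISFASTANDAWESOME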
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 [2024-10-01 01:34:31.520527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.832 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.397 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.397 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:52.397 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.397 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:52.397 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.925 
01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.925 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 [2024-10-01 01:34:34.376030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.926 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.184 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.184 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.184 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.184 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:55.184 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:57.712 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 [2024-10-01 01:34:37.168037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.713 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.970 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.970 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:57.970 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.970 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:57.970 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:00.505 
01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 [2024-10-01 01:34:39.913429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.505 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 [2024-10-01 01:34:39.961481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 
01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 [2024-10-01 01:34:40.009671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 [2024-10-01 01:34:40.057896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 [2024-10-01 01:34:40.105968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.506 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:00.507 "tick_rate": 2700000000, 00:17:00.507 "poll_groups": [ 00:17:00.507 { 00:17:00.507 "name": "nvmf_tgt_poll_group_000", 00:17:00.507 "admin_qpairs": 2, 00:17:00.507 "io_qpairs": 84, 00:17:00.507 "current_admin_qpairs": 0, 00:17:00.507 "current_io_qpairs": 0, 00:17:00.507 "pending_bdev_io": 0, 00:17:00.507 "completed_nvme_io": 184, 00:17:00.507 "transports": [ 00:17:00.507 { 00:17:00.507 "trtype": "TCP" 00:17:00.507 } 00:17:00.507 ] 00:17:00.507 }, 00:17:00.507 { 00:17:00.507 "name": "nvmf_tgt_poll_group_001", 00:17:00.507 "admin_qpairs": 2, 00:17:00.507 "io_qpairs": 84, 00:17:00.507 "current_admin_qpairs": 0, 00:17:00.507 "current_io_qpairs": 0, 00:17:00.507 "pending_bdev_io": 0, 00:17:00.507 "completed_nvme_io": 84, 00:17:00.507 "transports": [ 00:17:00.507 { 00:17:00.507 "trtype": "TCP" 00:17:00.507 } 00:17:00.507 ] 00:17:00.507 }, 00:17:00.507 { 00:17:00.507 "name": "nvmf_tgt_poll_group_002", 00:17:00.507 "admin_qpairs": 1, 00:17:00.507 "io_qpairs": 84, 00:17:00.507 "current_admin_qpairs": 0, 00:17:00.507 "current_io_qpairs": 0, 00:17:00.507 "pending_bdev_io": 0, 00:17:00.507 "completed_nvme_io": 234, 00:17:00.507 "transports": [ 00:17:00.507 { 00:17:00.507 "trtype": "TCP" 00:17:00.507 } 00:17:00.507 ] 00:17:00.507 }, 00:17:00.507 { 00:17:00.507 "name": "nvmf_tgt_poll_group_003", 00:17:00.507 "admin_qpairs": 2, 00:17:00.507 "io_qpairs": 84, 00:17:00.507 "current_admin_qpairs": 0, 00:17:00.507 "current_io_qpairs": 0, 00:17:00.507 "pending_bdev_io": 0, 00:17:00.507 "completed_nvme_io": 184, 00:17:00.507 "transports": [ 00:17:00.507 { 00:17:00.507 "trtype": "TCP" 00:17:00.507 } 00:17:00.507 ] 00:17:00.507 } 00:17:00.507 ] 00:17:00.507 }' 00:17:00.507 01:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.507 rmmod nvme_tcp 00:17:00.507 rmmod nvme_fabrics 00:17:00.507 rmmod nvme_keyring 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 872514 ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 872514 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 872514 ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 872514 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 872514 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872514' 
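The jsum checks above (markers @112/@113) sum a single numeric field across all poll groups reported by nvmf_get_stats; this run adds up to 7 admin qpairs and 336 I/O qpairs. A minimal equivalent of the helper, assuming jq and awk are installed and $rpc is a placeholder path to spdk/scripts/rpc.py:

  rpc=${rpc:-./scripts/rpc.py}
  jsum() {
      local filter=$1
      # Pull the stats JSON, extract one value per poll group, and sum them.
      "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
  jsum '.poll_groups[].io_qpairs'      # 336 in the run above
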
00:17:00.507 killing process with pid 872514 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 872514 00:17:00.507 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 872514 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.766 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.300 00:17:03.300 real 0m25.410s 00:17:03.300 user 1m22.747s 00:17:03.300 sys 0m4.183s 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.300 ************************************ 00:17:03.300 END TEST nvmf_rpc 00:17:03.300 ************************************ 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.300 ************************************ 00:17:03.300 START TEST nvmf_invalid 00:17:03.300 ************************************ 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:03.300 * Looking for test storage... 
00:17:03.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.300 --rc genhtml_branch_coverage=1 00:17:03.300 --rc genhtml_function_coverage=1 00:17:03.300 --rc genhtml_legend=1 00:17:03.300 --rc geninfo_all_blocks=1 00:17:03.300 --rc geninfo_unexecuted_blocks=1 00:17:03.300 00:17:03.300 ' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.300 --rc genhtml_branch_coverage=1 00:17:03.300 --rc genhtml_function_coverage=1 00:17:03.300 --rc genhtml_legend=1 00:17:03.300 --rc geninfo_all_blocks=1 00:17:03.300 --rc geninfo_unexecuted_blocks=1 00:17:03.300 00:17:03.300 ' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.300 --rc genhtml_branch_coverage=1 00:17:03.300 --rc genhtml_function_coverage=1 00:17:03.300 --rc genhtml_legend=1 00:17:03.300 --rc geninfo_all_blocks=1 00:17:03.300 --rc geninfo_unexecuted_blocks=1 00:17:03.300 00:17:03.300 ' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:03.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.300 --rc genhtml_branch_coverage=1 00:17:03.300 --rc genhtml_function_coverage=1 00:17:03.300 --rc genhtml_legend=1 00:17:03.300 --rc geninfo_all_blocks=1 00:17:03.300 --rc geninfo_unexecuted_blocks=1 00:17:03.300 00:17:03.300 ' 00:17:03.300 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:03.301 01:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.301 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.204 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:05.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:05.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:05.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:05.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.204 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.205 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:05.462 00:17:05.462 --- 10.0.0.2 ping statistics --- 00:17:05.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.462 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:17:05.462 00:17:05.462 --- 10.0.0.1 ping statistics --- 00:17:05.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.462 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=877019 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 877019 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 877019 ']' 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.462 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.462 [2024-10-01 01:34:45.222992] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
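Before the invalid-parameter cases run, nvmf_tcp_init (traced above) splits the two discovered ports into a loopback topology: the first port is moved into a network namespace and carries the target address 10.0.0.2, while the second stays in the root namespace as the initiator at 10.0.0.1. A sketch of that setup using the interface names found in this run (they will differ on other hardware):

  ns=cvl_0_0_ns_spdk
  tgt_if=cvl_0_0            # target-side port, moved into the namespace
  ini_if=cvl_0_1            # initiator-side port, stays in the root namespace

  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"

  ip addr add 10.0.0.1/24 dev "$ini_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up

  # Allow NVMe/TCP (port 4420) in on the initiator-side port, then verify both directions.
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..., as traced above), which is why every listener in this test binds to 10.0.0.2:4420.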
00:17:05.462 [2024-10-01 01:34:45.223076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.462 [2024-10-01 01:34:45.286141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.720 [2024-10-01 01:34:45.369339] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.720 [2024-10-01 01:34:45.369393] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.720 [2024-10-01 01:34:45.369422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.720 [2024-10-01 01:34:45.369436] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.720 [2024-10-01 01:34:45.369446] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.720 [2024-10-01 01:34:45.369513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.720 [2024-10-01 01:34:45.369601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.720 [2024-10-01 01:34:45.369665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.720 [2024-10-01 01:34:45.369668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:05.720 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31281 00:17:05.977 [2024-10-01 01:34:45.786818] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:05.977 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:05.977 { 00:17:05.977 "nqn": "nqn.2016-06.io.spdk:cnode31281", 00:17:05.977 "tgt_name": "foobar", 00:17:05.977 "method": "nvmf_create_subsystem", 00:17:05.977 "req_id": 1 00:17:05.977 } 00:17:05.977 Got JSON-RPC error response 00:17:05.977 response: 00:17:05.977 { 00:17:05.977 "code": -32603, 00:17:05.977 "message": "Unable to find target foobar" 00:17:05.977 }' 00:17:05.977 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:05.977 { 00:17:05.977 "nqn": "nqn.2016-06.io.spdk:cnode31281", 00:17:05.977 "tgt_name": "foobar", 00:17:05.978 "method": "nvmf_create_subsystem", 00:17:05.978 "req_id": 1 00:17:05.978 } 00:17:05.978 Got JSON-RPC error response 00:17:05.978 
response: 00:17:05.978 { 00:17:05.978 "code": -32603, 00:17:05.978 "message": "Unable to find target foobar" 00:17:05.978 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:05.978 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:05.978 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24072 00:17:06.235 [2024-10-01 01:34:46.067754] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24072: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:06.493 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:06.493 { 00:17:06.493 "nqn": "nqn.2016-06.io.spdk:cnode24072", 00:17:06.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.493 "method": "nvmf_create_subsystem", 00:17:06.493 "req_id": 1 00:17:06.493 } 00:17:06.493 Got JSON-RPC error response 00:17:06.493 response: 00:17:06.493 { 00:17:06.493 "code": -32602, 00:17:06.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.493 }' 00:17:06.493 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:06.493 { 00:17:06.493 "nqn": "nqn.2016-06.io.spdk:cnode24072", 00:17:06.493 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:06.493 "method": "nvmf_create_subsystem", 00:17:06.493 "req_id": 1 00:17:06.493 } 00:17:06.493 Got JSON-RPC error response 00:17:06.493 response: 00:17:06.493 { 00:17:06.493 "code": -32602, 00:17:06.493 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:06.493 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:06.493 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:06.493 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18755 00:17:06.493 [2024-10-01 01:34:46.332646] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18755: invalid model number 'SPDK_Controller' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:06.751 { 00:17:06.751 "nqn": "nqn.2016-06.io.spdk:cnode18755", 00:17:06.751 "model_number": "SPDK_Controller\u001f", 00:17:06.751 "method": "nvmf_create_subsystem", 00:17:06.751 "req_id": 1 00:17:06.751 } 00:17:06.751 Got JSON-RPC error response 00:17:06.751 response: 00:17:06.751 { 00:17:06.751 "code": -32602, 00:17:06.751 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.751 }' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:06.751 { 00:17:06.751 "nqn": "nqn.2016-06.io.spdk:cnode18755", 00:17:06.751 "model_number": "SPDK_Controller\u001f", 00:17:06.751 "method": "nvmf_create_subsystem", 00:17:06.751 "req_id": 1 00:17:06.751 } 00:17:06.751 Got JSON-RPC error response 00:17:06.751 response: 00:17:06.751 { 00:17:06.751 "code": -32602, 00:17:06.751 "message": "Invalid MN SPDK_Controller\u001f" 00:17:06.751 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:06.751 01:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.751 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:06.752 
01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 
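The long stretch of trace here is gen_random_s building a 21-character serial number one character at a time: each pass picks an entry from the chars array (decimal codes 32 through 127), converts it to hex with printf %x, turns that into a character with echo -e '\xNN', and appends it to string. A condensed sketch of the same loop is below; the trace never shows how the index is picked, so the use of $RANDOM is an assumption, and printf -v stands in for the echo -e round-trip so that a literal space survives the append:

  gen_random_s_sketch() {                      # sketch only; the real helper lives in target/invalid.sh
      local length=$1 ll s= code hex ch
      local chars=($(seq 32 127))              # printable ASCII plus DEL, as in the trace
      for (( ll = 0; ll < length; ll++ )); do
          code=${chars[RANDOM % ${#chars[@]}]} # assumed selection mechanism
          printf -v hex '%x' "$code"           # decimal -> hex, as in the trace
          printf -v ch "\\x$hex"               # hex -> character
          s+=$ch
      done
      printf '%s\n' "$s"
  }
  gen_random_s_sketch 21                       # this run produced '/D5$L!sv"zow8EsF"E8I^'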
00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/D5$L!sv"zow8EsF"E8I^' 00:17:06.752 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/D5$L!sv"zow8EsF"E8I^' nqn.2016-06.io.spdk:cnode11912 00:17:07.011 [2024-10-01 01:34:46.689870] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11912: invalid serial number '/D5$L!sv"zow8EsF"E8I^' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:07.011 { 00:17:07.011 "nqn": "nqn.2016-06.io.spdk:cnode11912", 00:17:07.011 "serial_number": "/D5$L!sv\"zow8EsF\"E8I^", 00:17:07.011 "method": "nvmf_create_subsystem", 00:17:07.011 "req_id": 1 00:17:07.011 } 00:17:07.011 Got JSON-RPC error response 00:17:07.011 response: 00:17:07.011 { 00:17:07.011 "code": -32602, 00:17:07.011 "message": "Invalid SN /D5$L!sv\"zow8EsF\"E8I^" 00:17:07.011 }' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:07.011 { 00:17:07.011 "nqn": "nqn.2016-06.io.spdk:cnode11912", 00:17:07.011 "serial_number": "/D5$L!sv\"zow8EsF\"E8I^", 00:17:07.011 "method": "nvmf_create_subsystem", 00:17:07.011 "req_id": 1 00:17:07.011 } 00:17:07.011 Got JSON-RPC error response 00:17:07.011 response: 00:17:07.011 { 00:17:07.011 "code": -32602, 00:17:07.011 "message": "Invalid SN /D5$L!sv\"zow8EsF\"E8I^" 00:17:07.011 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
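Every negative case in invalid.sh follows the same pattern: call rpc.py with a deliberately bad argument, capture the JSON-RPC error text, and assert that it contains the expected substring. Using the first case from this log (the unknown target name), the shape is roughly as follows; the redirection and the tolerated non-zero exit are assumptions, since the trace only shows the captured text and the [[ ]] check:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31281 2>&1) || true
  [[ $out == *"Unable to find target"* ]]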
00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:07.011 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
103 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:07.012 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
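The lengths are deliberate: the NVMe Identify Controller data lays out the serial number as a 20-byte field and the model number as a 40-byte field, so the 21-character serial generated earlier and the 41-character model number being assembled here are each one byte too long, and the target answers with "Invalid SN" / "Invalid MN". A call that stays inside the limits, such as the one invalid.sh itself issues a little further down, is expected to be accepted:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a   # 7-character SN, well under 20 bytes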
00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='>' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x75' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'GAbkF ~b[Ssgb;SO"7ev&>On8u@k+'\''e#Z9ZuS' 00:17:07.013 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'GAbkF ~b[Ssgb;SO"7ev&>On8u@k+'\''e#Z9ZuS' nqn.2016-06.io.spdk:cnode21309 00:17:07.271 [2024-10-01 01:34:47.075165] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21309: invalid model number 'GAbkF ~b[Ssgb;SO"7ev&>On8u@k+'e#Z9ZuS' 00:17:07.271 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:07.271 { 00:17:07.271 "nqn": "nqn.2016-06.io.spdk:cnode21309", 00:17:07.271 "model_number": "GA\u007fbkF ~b[Ssgb;SO\"7ev&>On8u@k+'\''e#Z9ZuS", 00:17:07.271 "method": "nvmf_create_subsystem", 00:17:07.271 "req_id": 1 00:17:07.271 } 00:17:07.271 Got JSON-RPC error response 00:17:07.271 response: 00:17:07.271 { 00:17:07.271 "code": -32602, 00:17:07.271 "message": "Invalid MN GA\u007fbkF ~b[Ssgb;SO\"7ev&>On8u@k+'\''e#Z9ZuS" 00:17:07.271 }' 00:17:07.271 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:07.271 { 00:17:07.271 "nqn": "nqn.2016-06.io.spdk:cnode21309", 00:17:07.271 "model_number": "GA\u007fbkF ~b[Ssgb;SO\"7ev&>On8u@k+'e#Z9ZuS", 00:17:07.271 "method": "nvmf_create_subsystem", 00:17:07.271 "req_id": 1 00:17:07.271 } 00:17:07.271 Got JSON-RPC error response 00:17:07.271 response: 00:17:07.271 { 00:17:07.271 "code": -32602, 00:17:07.271 "message": "Invalid MN GA\u007fbkF ~b[Ssgb;SO\"7ev&>On8u@k+'e#Z9ZuS" 00:17:07.271 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:07.271 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:07.528 [2024-10-01 01:34:47.344116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.528 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:07.785 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:07.785 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:07.785 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:07.785 01:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:08.043 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:08.301 [2024-10-01 01:34:47.901922] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:08.301 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:08.301 { 00:17:08.301 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:08.301 "listen_address": { 00:17:08.301 "trtype": "tcp", 00:17:08.301 "traddr": "", 00:17:08.301 "trsvcid": "4421" 00:17:08.301 }, 00:17:08.301 "method": "nvmf_subsystem_remove_listener", 00:17:08.301 "req_id": 1 00:17:08.301 } 00:17:08.301 Got JSON-RPC error response 00:17:08.301 response: 00:17:08.301 { 00:17:08.301 "code": -32602, 00:17:08.301 "message": "Invalid parameters" 00:17:08.301 }' 00:17:08.301 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:08.301 { 00:17:08.301 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:08.301 "listen_address": { 00:17:08.301 "trtype": "tcp", 00:17:08.301 "traddr": "", 00:17:08.301 "trsvcid": "4421" 00:17:08.301 }, 00:17:08.301 "method": "nvmf_subsystem_remove_listener", 00:17:08.301 "req_id": 1 00:17:08.301 } 00:17:08.301 Got JSON-RPC error response 00:17:08.301 response: 00:17:08.301 { 00:17:08.301 "code": -32602, 00:17:08.301 "message": "Invalid parameters" 00:17:08.301 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:08.301 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22428 -i 0 00:17:08.558 [2024-10-01 01:34:48.166780] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22428: invalid cntlid range [0-65519] 00:17:08.558 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:08.558 { 00:17:08.558 "nqn": "nqn.2016-06.io.spdk:cnode22428", 00:17:08.558 "min_cntlid": 0, 00:17:08.558 "method": "nvmf_create_subsystem", 00:17:08.558 "req_id": 1 00:17:08.558 } 00:17:08.558 Got JSON-RPC error response 00:17:08.558 response: 00:17:08.558 { 00:17:08.558 "code": -32602, 00:17:08.558 "message": "Invalid cntlid range [0-65519]" 00:17:08.558 }' 00:17:08.558 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:08.558 { 00:17:08.558 "nqn": "nqn.2016-06.io.spdk:cnode22428", 00:17:08.558 "min_cntlid": 0, 00:17:08.558 "method": "nvmf_create_subsystem", 00:17:08.558 "req_id": 1 00:17:08.558 } 00:17:08.558 Got JSON-RPC error response 00:17:08.558 response: 00:17:08.558 { 00:17:08.558 "code": -32602, 00:17:08.558 "message": "Invalid cntlid range [0-65519]" 00:17:08.558 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:08.558 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32395 -i 65520 00:17:08.815 [2024-10-01 01:34:48.443735] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32395: invalid cntlid range [65520-65519] 00:17:08.815 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:08.815 { 00:17:08.815 "nqn": 
"nqn.2016-06.io.spdk:cnode32395", 00:17:08.815 "min_cntlid": 65520, 00:17:08.815 "method": "nvmf_create_subsystem", 00:17:08.815 "req_id": 1 00:17:08.815 } 00:17:08.815 Got JSON-RPC error response 00:17:08.815 response: 00:17:08.815 { 00:17:08.815 "code": -32602, 00:17:08.815 "message": "Invalid cntlid range [65520-65519]" 00:17:08.815 }' 00:17:08.815 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:08.815 { 00:17:08.815 "nqn": "nqn.2016-06.io.spdk:cnode32395", 00:17:08.815 "min_cntlid": 65520, 00:17:08.815 "method": "nvmf_create_subsystem", 00:17:08.815 "req_id": 1 00:17:08.815 } 00:17:08.815 Got JSON-RPC error response 00:17:08.815 response: 00:17:08.815 { 00:17:08.815 "code": -32602, 00:17:08.815 "message": "Invalid cntlid range [65520-65519]" 00:17:08.815 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:08.815 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21707 -I 0 00:17:09.072 [2024-10-01 01:34:48.712612] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21707: invalid cntlid range [1-0] 00:17:09.072 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:09.072 { 00:17:09.072 "nqn": "nqn.2016-06.io.spdk:cnode21707", 00:17:09.072 "max_cntlid": 0, 00:17:09.072 "method": "nvmf_create_subsystem", 00:17:09.072 "req_id": 1 00:17:09.072 } 00:17:09.072 Got JSON-RPC error response 00:17:09.072 response: 00:17:09.072 { 00:17:09.072 "code": -32602, 00:17:09.072 "message": "Invalid cntlid range [1-0]" 00:17:09.072 }' 00:17:09.072 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:09.072 { 00:17:09.072 "nqn": "nqn.2016-06.io.spdk:cnode21707", 00:17:09.072 "max_cntlid": 0, 00:17:09.072 "method": "nvmf_create_subsystem", 00:17:09.072 "req_id": 1 00:17:09.072 } 00:17:09.072 Got JSON-RPC error response 00:17:09.072 response: 00:17:09.072 { 00:17:09.072 "code": -32602, 00:17:09.072 "message": "Invalid cntlid range [1-0]" 00:17:09.072 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:09.072 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13431 -I 65520 00:17:09.329 [2024-10-01 01:34:48.989565] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13431: invalid cntlid range [1-65520] 00:17:09.329 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:09.329 { 00:17:09.329 "nqn": "nqn.2016-06.io.spdk:cnode13431", 00:17:09.329 "max_cntlid": 65520, 00:17:09.329 "method": "nvmf_create_subsystem", 00:17:09.329 "req_id": 1 00:17:09.329 } 00:17:09.329 Got JSON-RPC error response 00:17:09.329 response: 00:17:09.329 { 00:17:09.329 "code": -32602, 00:17:09.329 "message": "Invalid cntlid range [1-65520]" 00:17:09.329 }' 00:17:09.329 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:09.329 { 00:17:09.329 "nqn": "nqn.2016-06.io.spdk:cnode13431", 00:17:09.329 "max_cntlid": 65520, 00:17:09.329 "method": "nvmf_create_subsystem", 00:17:09.329 "req_id": 1 00:17:09.329 } 00:17:09.329 Got JSON-RPC error response 00:17:09.329 response: 00:17:09.329 { 00:17:09.329 "code": -32602, 00:17:09.329 "message": "Invalid cntlid range [1-65520]" 
00:17:09.329 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:09.329 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14826 -i 6 -I 5 00:17:09.586 [2024-10-01 01:34:49.250447] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14826: invalid cntlid range [6-5] 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:09.586 { 00:17:09.586 "nqn": "nqn.2016-06.io.spdk:cnode14826", 00:17:09.586 "min_cntlid": 6, 00:17:09.586 "max_cntlid": 5, 00:17:09.586 "method": "nvmf_create_subsystem", 00:17:09.586 "req_id": 1 00:17:09.586 } 00:17:09.586 Got JSON-RPC error response 00:17:09.586 response: 00:17:09.586 { 00:17:09.586 "code": -32602, 00:17:09.586 "message": "Invalid cntlid range [6-5]" 00:17:09.586 }' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:09.586 { 00:17:09.586 "nqn": "nqn.2016-06.io.spdk:cnode14826", 00:17:09.586 "min_cntlid": 6, 00:17:09.586 "max_cntlid": 5, 00:17:09.586 "method": "nvmf_create_subsystem", 00:17:09.586 "req_id": 1 00:17:09.586 } 00:17:09.586 Got JSON-RPC error response 00:17:09.586 response: 00:17:09.586 { 00:17:09.586 "code": -32602, 00:17:09.586 "message": "Invalid cntlid range [6-5]" 00:17:09.586 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:09.586 { 00:17:09.586 "name": "foobar", 00:17:09.586 "method": "nvmf_delete_target", 00:17:09.586 "req_id": 1 00:17:09.586 } 00:17:09.586 Got JSON-RPC error response 00:17:09.586 response: 00:17:09.586 { 00:17:09.586 "code": -32602, 00:17:09.586 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:09.586 }' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:09.586 { 00:17:09.586 "name": "foobar", 00:17:09.586 "method": "nvmf_delete_target", 00:17:09.586 "req_id": 1 00:17:09.586 } 00:17:09.586 Got JSON-RPC error response 00:17:09.586 response: 00:17:09.586 { 00:17:09.586 "code": -32602, 00:17:09.586 "message": "The specified target doesn't exist, cannot delete it." 
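The last negative case goes through the multi-target helper rather than rpc.py: deleting a target name that was never created fails with "The specified target doesn't exist, cannot delete it.", which is exactly what the check here matches. In the same capture-and-match shape as before (redirection again assumed):

  mt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  out=$($mt nvmf_delete_target --name foobar 2>&1) || true
  [[ $out == *"cannot delete it"* ]]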
00:17:09.586 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.586 rmmod nvme_tcp 00:17:09.586 rmmod nvme_fabrics 00:17:09.586 rmmod nvme_keyring 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 877019 ']' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 877019 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 877019 ']' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 877019 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.586 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 877019 00:17:09.844 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.844 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.844 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 877019' 00:17:09.844 killing process with pid 877019 00:17:09.844 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 877019 00:17:09.844 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 877019 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # 
iptables-restore 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.104 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.105 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.057 00:17:12.057 real 0m9.070s 00:17:12.057 user 0m21.277s 00:17:12.057 sys 0m2.555s 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.057 ************************************ 00:17:12.057 END TEST nvmf_invalid 00:17:12.057 ************************************ 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.057 ************************************ 00:17:12.057 START TEST nvmf_connect_stress 00:17:12.057 ************************************ 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:12.057 * Looking for test storage... 
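The nvmf_invalid run that wraps up above exercises a simple negative-path pattern: issue an RPC that must be rejected, capture the tool's output, and glob-match the expected JSON-RPC error string before tearing the target down. A minimal sketch of that check, reusing the exact command and error text recorded in the log (capturing the output with 2>&1 is an assumption about how the helper collects it):

  # Negative-path check as seen in target/invalid.sh@83-84 above: min_cntlid > max_cntlid
  # must be rejected with JSON-RPC error -32602 "Invalid cntlid range [6-5]".
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14826 -i 6 -I 5 2>&1) || true
  [[ $out == *'Invalid cntlid range'* ]] && echo "got the expected rejection"

The same pattern is applied to nvmf_delete_target with a nonexistent target name ("foobar") just before the test calls nvmftestfini.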
00:17:12.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:12.057 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:12.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.316 --rc genhtml_branch_coverage=1 00:17:12.316 --rc genhtml_function_coverage=1 00:17:12.316 --rc genhtml_legend=1 00:17:12.316 --rc geninfo_all_blocks=1 00:17:12.316 --rc geninfo_unexecuted_blocks=1 00:17:12.316 00:17:12.316 ' 00:17:12.316 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:12.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.317 --rc genhtml_branch_coverage=1 00:17:12.317 --rc genhtml_function_coverage=1 00:17:12.317 --rc genhtml_legend=1 00:17:12.317 --rc geninfo_all_blocks=1 00:17:12.317 --rc geninfo_unexecuted_blocks=1 00:17:12.317 00:17:12.317 ' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:12.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.317 --rc genhtml_branch_coverage=1 00:17:12.317 --rc genhtml_function_coverage=1 00:17:12.317 --rc genhtml_legend=1 00:17:12.317 --rc geninfo_all_blocks=1 00:17:12.317 --rc geninfo_unexecuted_blocks=1 00:17:12.317 00:17:12.317 ' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:12.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.317 --rc genhtml_branch_coverage=1 00:17:12.317 --rc genhtml_function_coverage=1 00:17:12.317 --rc genhtml_legend=1 00:17:12.317 --rc geninfo_all_blocks=1 00:17:12.317 --rc geninfo_unexecuted_blocks=1 00:17:12.317 00:17:12.317 ' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:12.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.317 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.223 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.224 01:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:14.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:14.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:14.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:14.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:14.224 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
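Before any connections are attempted, gather_supported_nvmf_pci_devs (above) matches the two Intel e810 ports by vendor:device ID 0x8086:0x159b and resolves each PCI address to its kernel interface name through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A condensed sketch of that lookup, using the PCI addresses and the sysfs layout the log itself reports:

  # PCI-to-netdev resolution as in nvmf/common.sh@407/@424 above: each matching PCI
  # function exposes its interface name under /sys/bus/pci/devices/<addr>/net/.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
      done
  done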
00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.225 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.225 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.225 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.225 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.225 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:14.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:14.484 00:17:14.484 --- 10.0.0.2 ping statistics --- 00:17:14.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.484 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:17:14.484 00:17:14.484 --- 10.0.0.1 ping statistics --- 00:17:14.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.484 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:14.484 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=879660 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 879660 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 879660 ']' 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
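With both ports identified, nvmf_tcp_init (above) moves the target-side interface into a private network namespace, assigns 10.0.0.1/24 to the initiator side and 10.0.0.2/24 to the target side, opens TCP port 4420 in iptables, and verifies reachability with the two pings shown. nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. A condensed sketch of those steps, taken from the commands the log records (the socket-polling loop at the end is an illustrative stand-in for the waitforlisten helper):

  # Namespace, addressing and firewall setup as recorded above (nvmf/common.sh@271-@287),
  # followed by starting the target application with reactor mask 0xE.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done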
00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.485 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.485 [2024-10-01 01:34:54.208342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:14.485 [2024-10-01 01:34:54.208423] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.485 [2024-10-01 01:34:54.280619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.744 [2024-10-01 01:34:54.380197] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.744 [2024-10-01 01:34:54.380267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.744 [2024-10-01 01:34:54.380293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.744 [2024-10-01 01:34:54.380314] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.744 [2024-10-01 01:34:54.380332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.744 [2024-10-01 01:34:54.380435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.744 [2024-10-01 01:34:54.380506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.744 [2024-10-01 01:34:54.380497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 [2024-10-01 01:34:54.535084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 
01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 [2024-10-01 01:34:54.562482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 NULL1 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=879708 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.744 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.003 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:15.004 01:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.004 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.261 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.261 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:15.261 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.261 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.261 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.519 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.519 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:15.519 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.519 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.519 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.776 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.776 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:15.776 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.776 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.776 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.340 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.340 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:16.340 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.340 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.340 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.597 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:16.597 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.597 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.597 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.855 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.855 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:16.855 01:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.855 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.855 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.112 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.112 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:17.112 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.112 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.112 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.370 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.370 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:17.370 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.370 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.370 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.934 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.934 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:17.934 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.934 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.934 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.191 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.191 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:18.191 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.191 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.191 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.448 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.448 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:18.448 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.448 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.448 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.705 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.705 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:18.705 01:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.705 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.705 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.962 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.962 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:18.962 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.962 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.962 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.526 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.526 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:19.526 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.527 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.527 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.783 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.783 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:19.783 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.783 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.783 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:20.040 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.040 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.298 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.298 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:20.298 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.298 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.298 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.863 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.863 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:20.863 01:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.863 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.863 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.121 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.121 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:21.121 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.121 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.121 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.378 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.378 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:21.378 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.378 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.378 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.635 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.636 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:21.636 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.636 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.636 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.893 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.894 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:21.894 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.894 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.894 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.457 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.457 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:22.457 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.457 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.457 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.713 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.713 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:22.713 01:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.713 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.713 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.971 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.971 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:22.971 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.971 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.971 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.228 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.228 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:23.228 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.228 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.228 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.485 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.485 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:23.485 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.485 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.485 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.049 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.049 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:24.049 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.049 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.049 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.306 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.306 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:24.306 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.306 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.306 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.563 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.563 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:24.563 01:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.563 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.563 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.820 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.820 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:24.820 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.820 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.820 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.820 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 879708 00:17:25.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (879708) - No such process 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 879708 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.078 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.078 rmmod nvme_tcp 00:17:25.078 rmmod nvme_fabrics 00:17:25.336 rmmod nvme_keyring 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 879660 ']' 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 879660 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 879660 ']' 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 879660 00:17:25.336 01:35:04 
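The long run of connect_stress.sh@34/@35 entries above is a liveness poll: the script keeps issuing RPCs for as long as `kill -0` still succeeds for the background stressor (PID 879708), and drops out once the probe fails with "No such process". A minimal sketch of that pattern, not the script's verbatim source; the PID value, the RPC chosen, and the sleep interval are illustrative:

    stress_pid=879708                                                        # illustrative: PID of the background stress process
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # illustrative RPC client path

    # Keep poking the target over RPC while the stressor is still alive.
    while kill -0 "$stress_pid" 2>/dev/null; do
        "$rpc_py" nvmf_get_subsystems > /dev/null 2>&1 || true   # illustrative RPC choice
        sleep 1
    done

    # kill -0 now reports "No such process", as logged; reap the exit status.
    wait "$stress_pid" 2>/dev/null || true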
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.336 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 879660 00:17:25.336 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.336 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.336 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 879660' 00:17:25.336 killing process with pid 879660 00:17:25.336 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 879660 00:17:25.336 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 879660 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.593 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.495 00:17:27.495 real 0m15.505s 00:17:27.495 user 0m38.750s 00:17:27.495 sys 0m5.822s 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.495 ************************************ 00:17:27.495 END TEST nvmf_connect_stress 00:17:27.495 ************************************ 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.495 01:35:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.495 01:35:07 
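Just above, before the fused_ordering test starts, the log records the connect_stress teardown (nvmftestfini): sync, unload the nvme-tcp/nvme-fabrics initiator modules, stop the nvmf_tgt reactor (PID 879660), re-apply iptables without the SPDK-tagged rules, and dismantle the test namespace. A condensed sketch of that sequence; it assumes _remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace, since the helper's body is not shown in the log:

    tgt_pid=879660                 # PID of the nvmf_tgt reactor, as logged
    target_ns=cvl_0_0_ns_spdk

    sync
    modprobe -v -r nvme-tcp        # also drags out nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics

    kill "$tgt_pid" 2>/dev/null
    wait "$tgt_pid" 2>/dev/null || true

    # Re-apply the firewall minus every rule carrying the SPDK_NVMF comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete "$target_ns" 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1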
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.754 ************************************ 00:17:27.754 START TEST nvmf_fused_ordering 00:17:27.754 ************************************ 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.754 * Looking for test storage... 00:17:27.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.754 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.754 --rc genhtml_branch_coverage=1 00:17:27.754 --rc genhtml_function_coverage=1 00:17:27.754 --rc genhtml_legend=1 00:17:27.755 --rc geninfo_all_blocks=1 00:17:27.755 --rc geninfo_unexecuted_blocks=1 00:17:27.755 00:17:27.755 ' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.755 --rc genhtml_branch_coverage=1 00:17:27.755 --rc genhtml_function_coverage=1 00:17:27.755 --rc genhtml_legend=1 00:17:27.755 --rc geninfo_all_blocks=1 00:17:27.755 --rc geninfo_unexecuted_blocks=1 00:17:27.755 00:17:27.755 ' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.755 --rc genhtml_branch_coverage=1 00:17:27.755 --rc genhtml_function_coverage=1 00:17:27.755 --rc genhtml_legend=1 00:17:27.755 --rc geninfo_all_blocks=1 00:17:27.755 --rc geninfo_unexecuted_blocks=1 00:17:27.755 00:17:27.755 ' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.755 --rc genhtml_branch_coverage=1 00:17:27.755 --rc genhtml_function_coverage=1 00:17:27.755 --rc genhtml_legend=1 00:17:27.755 --rc geninfo_all_blocks=1 00:17:27.755 --rc geninfo_unexecuted_blocks=1 00:17:27.755 00:17:27.755 ' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:27.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.755 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:30.286 01:35:09 
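The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is bash objecting to a numeric test on an empty value: `[ '' -eq 1 ]` fails to parse rather than evaluating to false, which is harmless here because the branch is skipped either way. A defensive variant supplies a numeric default before comparing; the variable name below is a placeholder, not the one common.sh actually tests:

    flag=""                            # stands in for an unset/empty environment toggle
    if [ "${flag:-0}" -eq 1 ]; then    # ':-0' provides a numeric default, avoiding the parse error
        echo "feature enabled"
    else
        echo "feature disabled"
    fi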
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.286 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
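The two "Found net devices under ..." entries come from walking sysfs: for each supported NIC PCI address (here the Intel E810 0x8086:0x159b functions at 0000:0a:00.0 and 0000:0a:00.1), the script lists the kernel interfaces parented under that device and keeps the ones that are up. A stand-alone sketch of that walk; the PCI addresses are taken from this run, and reading operstate is an assumption about how the "up" check is implemented:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue          # no interface bound (e.g. driver not loaded)
            dev=${netdir##*/}                     # cvl_0_0 / cvl_0_1 in this run
            state=$(cat "$netdir/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $dev (operstate=$state)"
        done
    done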
00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:30.286 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:17:30.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:30.287 00:17:30.287 --- 10.0.0.2 ping statistics --- 00:17:30.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.287 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:17:30.287 00:17:30.287 --- 10.0.0.1 ping statistics --- 00:17:30.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.287 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=882960 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 882960 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 882960 ']' 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
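nvmf_tcp_init, whose commands are spelled out verbatim above, splits the two E810 ports across a network namespace so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over a real link, opens TCP/4420 with an iptables rule tagged SPDK_NVMF so teardown can find it again, and ping-tests both directions. Collected into one runnable block, with interface names as in this run:

    target_ns=cvl_0_0_ns_spdk

    ip netns add "$target_ns"
    ip link set cvl_0_0 netns "$target_ns"

    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$target_ns" ip link set cvl_0_0 up
    ip netns exec "$target_ns" ip link set lo up

    # Tag the ACCEPT rule so nvmftestfini can strip it again via grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                              # root namespace -> target namespace
    ip netns exec "$target_ns" ping -c 1 10.0.0.1   # target namespace -> root namespace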
00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.287 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 [2024-10-01 01:35:09.788993] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:30.287 [2024-10-01 01:35:09.789078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.287 [2024-10-01 01:35:09.863514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.287 [2024-10-01 01:35:09.956289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.287 [2024-10-01 01:35:09.956358] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.287 [2024-10-01 01:35:09.956375] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.287 [2024-10-01 01:35:09.956388] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.287 [2024-10-01 01:35:09.956400] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.287 [2024-10-01 01:35:09.956441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 [2024-10-01 01:35:10.102960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 [2024-10-01 01:35:10.119227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.287 NULL1 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.287 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.545 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:30.545 [2024-10-01 01:35:10.164563] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
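The rpc_cmd calls interleaved above configure the target end to end; rpc_cmd is effectively the harness's wrapper around the SPDK RPC client, so the same bring-up can be written directly against scripts/rpc.py. Flags and sizes below are exactly as logged; only the long script path is shortened into the $rpc variable for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as the harness passed them
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, serial number, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                # listener on the namespaced target address
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as namespace ID 1, as logged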
00:17:30.545 [2024-10-01 01:35:10.164604] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882991 ] 00:17:30.802 Attached to nqn.2016-06.io.spdk:cnode1 00:17:30.802 Namespace ID: 1 size: 1GB 00:17:30.802 fused_ordering(0) 00:17:30.802 fused_ordering(1) 00:17:30.802 fused_ordering(2) 00:17:30.802 fused_ordering(3) 00:17:30.802 fused_ordering(4) 00:17:30.802 fused_ordering(5) 00:17:30.802 fused_ordering(6) 00:17:30.802 fused_ordering(7) 00:17:30.802 fused_ordering(8) 00:17:30.802 fused_ordering(9) 00:17:30.802 fused_ordering(10) 00:17:30.802 fused_ordering(11) 00:17:30.802 fused_ordering(12) 00:17:30.802 fused_ordering(13) 00:17:30.802 fused_ordering(14) 00:17:30.802 fused_ordering(15) 00:17:30.802 fused_ordering(16) 00:17:30.802 fused_ordering(17) 00:17:30.802 fused_ordering(18) 00:17:30.802 fused_ordering(19) 00:17:30.802 fused_ordering(20) 00:17:30.802 fused_ordering(21) 00:17:30.802 fused_ordering(22) 00:17:30.802 fused_ordering(23) 00:17:30.802 fused_ordering(24) 00:17:30.802 fused_ordering(25) 00:17:30.802 fused_ordering(26) 00:17:30.802 fused_ordering(27) 00:17:30.802 fused_ordering(28) 00:17:30.802 fused_ordering(29) 00:17:30.802 fused_ordering(30) 00:17:30.802 fused_ordering(31) 00:17:30.802 fused_ordering(32) 00:17:30.802 fused_ordering(33) 00:17:30.802 fused_ordering(34) 00:17:30.802 fused_ordering(35) 00:17:30.802 fused_ordering(36) 00:17:30.802 fused_ordering(37) 00:17:30.802 fused_ordering(38) 00:17:30.802 fused_ordering(39) 00:17:30.802 fused_ordering(40) 00:17:30.802 fused_ordering(41) 00:17:30.802 fused_ordering(42) 00:17:30.802 fused_ordering(43) 00:17:30.802 fused_ordering(44) 00:17:30.802 fused_ordering(45) 00:17:30.802 fused_ordering(46) 00:17:30.802 fused_ordering(47) 00:17:30.802 fused_ordering(48) 00:17:30.802 fused_ordering(49) 00:17:30.802 fused_ordering(50) 00:17:30.802 fused_ordering(51) 00:17:30.802 fused_ordering(52) 00:17:30.802 fused_ordering(53) 00:17:30.802 fused_ordering(54) 00:17:30.802 fused_ordering(55) 00:17:30.802 fused_ordering(56) 00:17:30.802 fused_ordering(57) 00:17:30.802 fused_ordering(58) 00:17:30.802 fused_ordering(59) 00:17:30.802 fused_ordering(60) 00:17:30.802 fused_ordering(61) 00:17:30.802 fused_ordering(62) 00:17:30.802 fused_ordering(63) 00:17:30.802 fused_ordering(64) 00:17:30.802 fused_ordering(65) 00:17:30.802 fused_ordering(66) 00:17:30.802 fused_ordering(67) 00:17:30.802 fused_ordering(68) 00:17:30.802 fused_ordering(69) 00:17:30.802 fused_ordering(70) 00:17:30.802 fused_ordering(71) 00:17:30.802 fused_ordering(72) 00:17:30.802 fused_ordering(73) 00:17:30.802 fused_ordering(74) 00:17:30.802 fused_ordering(75) 00:17:30.802 fused_ordering(76) 00:17:30.802 fused_ordering(77) 00:17:30.802 fused_ordering(78) 00:17:30.802 fused_ordering(79) 00:17:30.802 fused_ordering(80) 00:17:30.802 fused_ordering(81) 00:17:30.802 fused_ordering(82) 00:17:30.802 fused_ordering(83) 00:17:30.802 fused_ordering(84) 00:17:30.802 fused_ordering(85) 00:17:30.802 fused_ordering(86) 00:17:30.802 fused_ordering(87) 00:17:30.803 fused_ordering(88) 00:17:30.803 fused_ordering(89) 00:17:30.803 fused_ordering(90) 00:17:30.803 fused_ordering(91) 00:17:30.803 fused_ordering(92) 00:17:30.803 fused_ordering(93) 00:17:30.803 fused_ordering(94) 00:17:30.803 fused_ordering(95) 00:17:30.803 fused_ordering(96) 00:17:30.803 fused_ordering(97) 00:17:30.803 fused_ordering(98) 
00:17:30.803 fused_ordering(99) ... 00:17:33.065 fused_ordering(958) [860 repetitive per-iteration fused_ordering entries collapsed; the sequence continues below through fused_ordering(1023)]
00:17:33.065 fused_ordering(959) 00:17:33.065 fused_ordering(960) 00:17:33.065 fused_ordering(961) 00:17:33.065 fused_ordering(962) 00:17:33.065 fused_ordering(963) 00:17:33.065 fused_ordering(964) 00:17:33.065 fused_ordering(965) 00:17:33.065 fused_ordering(966) 00:17:33.065 fused_ordering(967) 00:17:33.065 fused_ordering(968) 00:17:33.065 fused_ordering(969) 00:17:33.065 fused_ordering(970) 00:17:33.065 fused_ordering(971) 00:17:33.065 fused_ordering(972) 00:17:33.065 fused_ordering(973) 00:17:33.065 fused_ordering(974) 00:17:33.065 fused_ordering(975) 00:17:33.065 fused_ordering(976) 00:17:33.065 fused_ordering(977) 00:17:33.065 fused_ordering(978) 00:17:33.065 fused_ordering(979) 00:17:33.065 fused_ordering(980) 00:17:33.065 fused_ordering(981) 00:17:33.065 fused_ordering(982) 00:17:33.065 fused_ordering(983) 00:17:33.065 fused_ordering(984) 00:17:33.065 fused_ordering(985) 00:17:33.065 fused_ordering(986) 00:17:33.065 fused_ordering(987) 00:17:33.065 fused_ordering(988) 00:17:33.065 fused_ordering(989) 00:17:33.065 fused_ordering(990) 00:17:33.065 fused_ordering(991) 00:17:33.065 fused_ordering(992) 00:17:33.065 fused_ordering(993) 00:17:33.065 fused_ordering(994) 00:17:33.065 fused_ordering(995) 00:17:33.065 fused_ordering(996) 00:17:33.065 fused_ordering(997) 00:17:33.065 fused_ordering(998) 00:17:33.065 fused_ordering(999) 00:17:33.065 fused_ordering(1000) 00:17:33.065 fused_ordering(1001) 00:17:33.065 fused_ordering(1002) 00:17:33.065 fused_ordering(1003) 00:17:33.065 fused_ordering(1004) 00:17:33.065 fused_ordering(1005) 00:17:33.065 fused_ordering(1006) 00:17:33.065 fused_ordering(1007) 00:17:33.065 fused_ordering(1008) 00:17:33.065 fused_ordering(1009) 00:17:33.065 fused_ordering(1010) 00:17:33.065 fused_ordering(1011) 00:17:33.065 fused_ordering(1012) 00:17:33.065 fused_ordering(1013) 00:17:33.065 fused_ordering(1014) 00:17:33.065 fused_ordering(1015) 00:17:33.065 fused_ordering(1016) 00:17:33.065 fused_ordering(1017) 00:17:33.065 fused_ordering(1018) 00:17:33.065 fused_ordering(1019) 00:17:33.065 fused_ordering(1020) 00:17:33.065 fused_ordering(1021) 00:17:33.065 fused_ordering(1022) 00:17:33.065 fused_ordering(1023) 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.065 rmmod nvme_tcp 00:17:33.065 rmmod nvme_fabrics 00:17:33.065 rmmod nvme_keyring 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:33.065 01:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 882960 ']' 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 882960 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 882960 ']' 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 882960 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882960 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882960' 00:17:33.065 killing process with pid 882960 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 882960 00:17:33.065 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 882960 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.324 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:35.858 00:17:35.858 real 0m7.832s 00:17:35.858 user 0m5.347s 00:17:35.858 sys 0m3.440s 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:35.858 ************************************ 00:17:35.858 END TEST nvmf_fused_ordering 00:17:35.858 
************************************ 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.858 ************************************ 00:17:35.858 START TEST nvmf_ns_masking 00:17:35.858 ************************************ 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:35.858 * Looking for test storage... 00:17:35.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.858 --rc genhtml_branch_coverage=1 00:17:35.858 --rc genhtml_function_coverage=1 00:17:35.858 --rc genhtml_legend=1 00:17:35.858 --rc geninfo_all_blocks=1 00:17:35.858 --rc geninfo_unexecuted_blocks=1 00:17:35.858 00:17:35.858 ' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.858 --rc genhtml_branch_coverage=1 00:17:35.858 --rc genhtml_function_coverage=1 00:17:35.858 --rc genhtml_legend=1 00:17:35.858 --rc geninfo_all_blocks=1 00:17:35.858 --rc geninfo_unexecuted_blocks=1 00:17:35.858 00:17:35.858 ' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.858 --rc genhtml_branch_coverage=1 00:17:35.858 --rc genhtml_function_coverage=1 00:17:35.858 --rc genhtml_legend=1 00:17:35.858 --rc geninfo_all_blocks=1 00:17:35.858 --rc geninfo_unexecuted_blocks=1 00:17:35.858 00:17:35.858 ' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.858 --rc genhtml_branch_coverage=1 00:17:35.858 --rc genhtml_function_coverage=1 00:17:35.858 --rc genhtml_legend=1 00:17:35.858 --rc geninfo_all_blocks=1 00:17:35.858 --rc geninfo_unexecuted_blocks=1 00:17:35.858 00:17:35.858 ' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.858 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:35.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8d4e8404-06ce-40fc-bccd-47f5f2f1a5fa 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7013b843-ae59-4b80-bfcd-51e0b9bcc03a 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=16217216-c816-4795-8948-3c62bf5cfd33 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:35.859 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.389 01:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:38.389 01:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:38.389 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:38.390 
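The pci_net_devs glob traced above is how the harness maps a supported NIC PCI function (e.g. 0000:0a:00.0) to its kernel interface name (cvl_0_0). A minimal standalone sketch of that sysfs lookup, assuming the same node layout as this log; the helper name pci_to_netdev is illustrative and is not part of nvmf/common.sh:

# Sketch only: resolve the net interface(s) behind a PCI function via sysfs,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) pattern traced above.
pci_to_netdev() {
  local pci=$1
  local -a devs=(/sys/bus/pci/devices/"$pci"/net/*)
  # Without nullglob, an unmatched glob stays literal, so test the first entry exists.
  [[ -e ${devs[0]} ]] || { echo "no net device bound to $pci" >&2; return 1; }
  echo "${devs[@]##*/}"   # strip the sysfs path, keep interface names only
}
pci_to_netdev 0000:0a:00.0   # on this test node the log shows this resolves to cvl_0_0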
01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.390 01:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:17:38.390 00:17:38.390 --- 10.0.0.2 ping statistics --- 00:17:38.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.390 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:17:38.390 00:17:38.390 --- 10.0.0.1 ping statistics --- 00:17:38.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.390 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=885320 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 885320 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 885320 ']' 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 
-- # local max_retries=100 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.390 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.390 [2024-10-01 01:35:17.832731] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:38.390 [2024-10-01 01:35:17.832803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.390 [2024-10-01 01:35:17.901684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.390 [2024-10-01 01:35:17.995845] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.390 [2024-10-01 01:35:17.995906] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.390 [2024-10-01 01:35:17.995923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.390 [2024-10-01 01:35:17.995936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.390 [2024-10-01 01:35:17.995947] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.390 [2024-10-01 01:35:17.995977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.390 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.390 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:38.390 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:38.390 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.391 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:38.391 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.391 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:38.648 [2024-10-01 01:35:18.378167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.648 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:38.648 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:38.648 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:38.907 Malloc1 00:17:38.907 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:39.472 
Malloc2 00:17:39.472 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:39.730 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:39.987 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.244 [2024-10-01 01:35:19.921105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.244 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:40.244 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16217216-c816-4795-8948-3c62bf5cfd33 -a 10.0.0.2 -s 4420 -i 4 00:17:40.244 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.244 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:40.244 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.244 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:40.244 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:42.770 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.771 [ 0]:0x1 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caf8d88a7c545f5b1d79424d7f10011 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caf8d88a7c545f5b1d79424d7f10011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.771 [ 0]:0x1 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caf8d88a7c545f5b1d79424d7f10011 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caf8d88a7c545f5b1d79424d7f10011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.771 [ 1]:0x2 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:42.771 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.029 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.286 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:43.545 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:43.545 01:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16217216-c816-4795-8948-3c62bf5cfd33 -a 10.0.0.2 -s 4420 -i 4 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:43.825 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:45.747 [ 0]:0x2 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.747 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.312 [ 0]:0x1 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caf8d88a7c545f5b1d79424d7f10011 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caf8d88a7c545f5b1d79424d7f10011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.312 [ 1]:0x2 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.312 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.570 [ 0]:0x2 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.570 01:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.570 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16217216-c816-4795-8948-3c62bf5cfd33 -a 10.0.0.2 -s 4420 -i 4 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:47.135 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:49.030 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.288 [ 0]:0x1 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caf8d88a7c545f5b1d79424d7f10011 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caf8d88a7c545f5b1d79424d7f10011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.288 [ 1]:0x2 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.288 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.288 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:49.288 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.288 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:49.546 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.547 [ 0]:0x2 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.547 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.804 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:49.804 
01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:50.062 [2024-10-01 01:35:29.714785] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:50.062 request: 00:17:50.062 { 00:17:50.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.062 "nsid": 2, 00:17:50.062 "host": "nqn.2016-06.io.spdk:host1", 00:17:50.062 "method": "nvmf_ns_remove_host", 00:17:50.062 "req_id": 1 00:17:50.062 } 00:17:50.062 Got JSON-RPC error response 00:17:50.062 response: 00:17:50.062 { 00:17:50.062 "code": -32602, 00:17:50.062 "message": "Invalid parameters" 00:17:50.062 } 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.062 
01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.062 [ 0]:0x2 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.062 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82373a995a214895baad710e097260b4 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82373a995a214895baad710e097260b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=886827 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 886827 /var/tmp/host.sock 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 886827 ']' 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:50.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.063 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 [2024-10-01 01:35:29.929094] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:17:50.323 [2024-10-01 01:35:29.929181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886827 ] 00:17:50.323 [2024-10-01 01:35:30.000277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.323 [2024-10-01 01:35:30.100027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.581 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.581 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:50.581 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.840 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:51.408 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8d4e8404-06ce-40fc-bccd-47f5f2f1a5fa 00:17:51.408 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:51.408 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8D4E840406CE40FCBCCD47F5F2F1A5FA -i 00:17:51.667 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7013b843-ae59-4b80-bfcd-51e0b9bcc03a 00:17:51.667 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:17:51.667 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7013B843AE594B80BFCD51E0B9BCC03A -i 00:17:51.924 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.182 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:52.440 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:52.440 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:53.009 nvme0n1 00:17:53.009 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:53.009 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:53.267 nvme1n2 00:17:53.526 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:53.526 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:53.526 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:53.526 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:53.526 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:53.785 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:53.785 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:53.785 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:53.785 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:54.043 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8d4e8404-06ce-40fc-bccd-47f5f2f1a5fa == \8\d\4\e\8\4\0\4\-\0\6\c\e\-\4\0\f\c\-\b\c\c\d\-\4\7\f\5\f\2\f\1\a\5\f\a ]] 00:17:54.043 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:54.043 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:54.043 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7013b843-ae59-4b80-bfcd-51e0b9bcc03a == \7\0\1\3\b\8\4\3\-\a\e\5\9\-\4\b\8\0\-\b\f\c\d\-\5\1\e\0\b\9\b\c\c\0\3\a ]] 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 886827 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 886827 ']' 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 886827 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886827 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886827' 00:17:54.302 killing 
process with pid 886827 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 886827 00:17:54.302 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 886827 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.870 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.870 rmmod nvme_tcp 00:17:54.870 rmmod nvme_fabrics 00:17:54.870 rmmod nvme_keyring 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 885320 ']' 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 885320 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 885320 ']' 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 885320 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 885320 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 885320' 00:17:55.130 killing process with pid 885320 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 885320 00:17:55.130 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 885320 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.390 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.299 00:17:57.299 real 0m21.863s 00:17:57.299 user 0m29.575s 00:17:57.299 sys 0m4.297s 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.299 ************************************ 00:17:57.299 END TEST nvmf_ns_masking 00:17:57.299 ************************************ 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.299 01:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.558 ************************************ 00:17:57.558 START TEST nvmf_nvme_cli 00:17:57.558 ************************************ 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.558 * Looking for test storage... 
00:17:57.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:57.558 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:57.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.559 --rc genhtml_branch_coverage=1 00:17:57.559 --rc genhtml_function_coverage=1 00:17:57.559 --rc genhtml_legend=1 00:17:57.559 --rc geninfo_all_blocks=1 00:17:57.559 --rc geninfo_unexecuted_blocks=1 00:17:57.559 00:17:57.559 ' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:57.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.559 --rc genhtml_branch_coverage=1 00:17:57.559 --rc genhtml_function_coverage=1 00:17:57.559 --rc genhtml_legend=1 00:17:57.559 --rc geninfo_all_blocks=1 00:17:57.559 --rc geninfo_unexecuted_blocks=1 00:17:57.559 00:17:57.559 ' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:57.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.559 --rc genhtml_branch_coverage=1 00:17:57.559 --rc genhtml_function_coverage=1 00:17:57.559 --rc genhtml_legend=1 00:17:57.559 --rc geninfo_all_blocks=1 00:17:57.559 --rc geninfo_unexecuted_blocks=1 00:17:57.559 00:17:57.559 ' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:57.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.559 --rc genhtml_branch_coverage=1 00:17:57.559 --rc genhtml_function_coverage=1 00:17:57.559 --rc genhtml_legend=1 00:17:57.559 --rc geninfo_all_blocks=1 00:17:57.559 --rc geninfo_unexecuted_blocks=1 00:17:57.559 00:17:57.559 ' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.559 01:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:57.559 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.560 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:00.095 01:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.095 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:00.096 00:18:00.096 --- 10.0.0.2 ping statistics --- 00:18:00.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.096 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:18:00.096 00:18:00.096 --- 10.0.0.1 ping statistics --- 00:18:00.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.096 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=889439 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 889439 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 889439 ']' 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.096 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.096 [2024-10-01 01:35:39.714363] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:18:00.096 [2024-10-01 01:35:39.714448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.096 [2024-10-01 01:35:39.786699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.096 [2024-10-01 01:35:39.882327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.096 [2024-10-01 01:35:39.882399] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.096 [2024-10-01 01:35:39.882417] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.096 [2024-10-01 01:35:39.882431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.096 [2024-10-01 01:35:39.882443] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.096 [2024-10-01 01:35:39.882502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.096 [2024-10-01 01:35:39.882564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.096 [2024-10-01 01:35:39.882616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.096 [2024-10-01 01:35:39.882619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.354 [2024-10-01 01:35:40.046121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.354 Malloc0 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.354 Malloc1 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.354 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 [2024-10-01 01:35:40.129459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.355 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:00.614 00:18:00.614 Discovery Log Number of Records 2, Generation counter 2 00:18:00.615 =====Discovery Log Entry 0====== 00:18:00.615 trtype: tcp 00:18:00.615 adrfam: ipv4 00:18:00.615 subtype: current discovery subsystem 00:18:00.615 treq: not required 00:18:00.615 portid: 0 00:18:00.615 trsvcid: 4420 00:18:00.615 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:00.615 traddr: 10.0.0.2 00:18:00.615 eflags: explicit discovery connections, duplicate discovery information 00:18:00.615 sectype: none 00:18:00.615 =====Discovery Log Entry 1====== 00:18:00.615 trtype: tcp 00:18:00.615 adrfam: ipv4 00:18:00.615 subtype: nvme subsystem 00:18:00.615 treq: not required 00:18:00.615 portid: 0 00:18:00.615 trsvcid: 4420 00:18:00.615 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:00.615 traddr: 10.0.0.2 00:18:00.615 eflags: none 00:18:00.615 sectype: none 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:00.615 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.185 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:01.185 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:01.185 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.186 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:01.186 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:01.186 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:03.091 01:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.091 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:03.350 /dev/nvme0n2 ]] 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.350 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:03.608 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.869 01:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.869 rmmod nvme_tcp 00:18:03.869 rmmod nvme_fabrics 00:18:03.869 rmmod nvme_keyring 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 889439 ']' 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 889439 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 889439 ']' 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 889439 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 889439 
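The disconnect above is gated by the same polling style that gated the connect: waitforserial waited until lsblk showed two namespaces with serial SPDKISFASTANDAWESOME, and waitforserial_disconnect now waits until the last one is gone. A rough sketch of that pattern follows; the helper names and timeout are illustrative, not the autotest_common.sh originals.

# Poll lsblk until the expected number of namespaces with a given serial appear.
wait_for_serial() {
    local serial=$1 want=${2:-1} i=0 have
    while (( i++ <= 15 )); do
        have=$(lsblk -l -o NAME,SERIAL | grep -c -w "$serial")
        (( have == want )) && return 0
        sleep 2
    done
    return 1
}

# Poll until no block device carries the serial any more (post-disconnect).
wait_for_serial_gone() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}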
00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 889439' 00:18:03.869 killing process with pid 889439 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 889439 00:18:03.869 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 889439 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.128 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.670 00:18:06.670 real 0m8.796s 00:18:06.670 user 0m16.628s 00:18:06.670 sys 0m2.359s 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 ************************************ 00:18:06.670 END TEST nvmf_nvme_cli 00:18:06.670 ************************************ 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.670 01:35:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 ************************************ 00:18:06.670 START TEST nvmf_vfio_user 00:18:06.670 ************************************ 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
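With the nvme_cli test finished (8.8 s wall clock above) and before the vfio-user output starts, the sequence it exercised can be recapped in a few commands. The addresses and NQNs below are the ones printed in the trace; everything else is a hand-written sketch, not a copy of nvme_cli.sh. The 10.0.0.2 target address lives inside the cvl_0_0_ns_spdk network namespace that nvmftestinit set up around the first E810 port earlier in this log.

# Assumed environment: nvmf_tgt already listening on 10.0.0.2:4420 inside the
# cvl_0_0_ns_spdk namespace, as configured earlier in this log.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2016-06.io.spdk:cnode1

# Discovery should report two records: the discovery subsystem and cnode1.
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420

# Connect, enumerate the two Malloc-backed namespaces, then tear down.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
nvme list
nvme disconnect -n "$SUBNQN"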
00:18:06.670 * Looking for test storage... 00:18:06.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:06.670 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.671 --rc genhtml_branch_coverage=1 00:18:06.671 --rc genhtml_function_coverage=1 00:18:06.671 --rc genhtml_legend=1 00:18:06.671 --rc geninfo_all_blocks=1 00:18:06.671 --rc geninfo_unexecuted_blocks=1 00:18:06.671 00:18:06.671 ' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.671 --rc genhtml_branch_coverage=1 00:18:06.671 --rc genhtml_function_coverage=1 00:18:06.671 --rc genhtml_legend=1 00:18:06.671 --rc geninfo_all_blocks=1 00:18:06.671 --rc geninfo_unexecuted_blocks=1 00:18:06.671 00:18:06.671 ' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.671 --rc genhtml_branch_coverage=1 00:18:06.671 --rc genhtml_function_coverage=1 00:18:06.671 --rc genhtml_legend=1 00:18:06.671 --rc geninfo_all_blocks=1 00:18:06.671 --rc geninfo_unexecuted_blocks=1 00:18:06.671 00:18:06.671 ' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:06.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.671 --rc genhtml_branch_coverage=1 00:18:06.671 --rc genhtml_function_coverage=1 00:18:06.671 --rc genhtml_legend=1 00:18:06.671 --rc geninfo_all_blocks=1 00:18:06.671 --rc geninfo_unexecuted_blocks=1 00:18:06.671 00:18:06.671 ' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:06.671 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=890374 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 890374' 00:18:06.672 Process pid: 890374 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 890374 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 890374 ']' 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:06.672 [2024-10-01 01:35:46.216476] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:18:06.672 [2024-10-01 01:35:46.216573] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.672 [2024-10-01 01:35:46.275572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.672 [2024-10-01 01:35:46.363396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.672 [2024-10-01 01:35:46.363467] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
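Before any RPCs can run, the test has to start nvmf_tgt (here with -i 0 -e 0xFFFF -m '[0,1,2,3]', i.e. four reactors and all tracepoint groups) and block until the app is listening on /var/tmp/spdk.sock, which is what waitforlisten does above. A simplified, hedged stand-in for that launch-and-wait step, polling the RPC socket with rpc_get_methods instead of the helper's extra safeguards:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready"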
00:18:06.672 [2024-10-01 01:35:46.363483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.672 [2024-10-01 01:35:46.363497] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.672 [2024-10-01 01:35:46.363508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.672 [2024-10-01 01:35:46.363591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.672 [2024-10-01 01:35:46.363660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.672 [2024-10-01 01:35:46.363755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.672 [2024-10-01 01:35:46.363758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:06.672 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:08.051 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:08.310 Malloc1 00:18:08.310 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:08.568 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:08.826 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:09.085 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:09.085 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:09.085 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:09.343 Malloc2 00:18:09.343 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
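The setup_nvmf_vfio_user xtrace above reduces to a short RPC sequence: create one VFIOUSER transport, then for each of the NUM_DEVICES=2 controllers create a 64 MiB / 512-byte-block malloc bdev, a subsystem, a namespace, and a vfio-user listener rooted under /var/run/vfio-user. A hedged condensation of those calls (paths and names copied from the log, loop structure simplified):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc_py bdev_malloc_create 64 512 -b Malloc$i                      # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
        $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0   # socket directory doubles as traddr
    done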
00:18:09.908 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:09.908 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:10.166 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:10.424 [2024-10-01 01:35:50.026665] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:18:10.425 [2024-10-01 01:35:50.026711] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890795 ] 00:18:10.425 [2024-10-01 01:35:50.061888] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:10.425 [2024-10-01 01:35:50.071593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.425 [2024-10-01 01:35:50.071627] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efc7ed24000 00:18:10.425 [2024-10-01 01:35:50.072567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.073561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.074566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.075575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.076573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.077582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.078585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.079594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.425 [2024-10-01 01:35:50.080606] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.425 [2024-10-01 01:35:50.080626] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efc7da1c000 00:18:10.425 [2024-10-01 01:35:50.081746] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.425 [2024-10-01 01:35:50.096769] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:10.425 [2024-10-01 01:35:50.096807] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:10.425 [2024-10-01 01:35:50.101753] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:10.425 [2024-10-01 01:35:50.101813] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:10.425 [2024-10-01 01:35:50.101938] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:10.425 [2024-10-01 01:35:50.101976] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:10.425 [2024-10-01 01:35:50.102013] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:10.425 [2024-10-01 01:35:50.104011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:10.425 [2024-10-01 01:35:50.104032] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:10.425 [2024-10-01 01:35:50.104045] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:10.425 [2024-10-01 01:35:50.104771] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:10.425 [2024-10-01 01:35:50.104789] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:10.425 [2024-10-01 01:35:50.104801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.105779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:10.425 [2024-10-01 01:35:50.105798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.106787] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:10.425 [2024-10-01 
01:35:50.106805] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:10.425 [2024-10-01 01:35:50.106814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.106824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.106934] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:10.425 [2024-10-01 01:35:50.106941] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.106950] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:10.425 [2024-10-01 01:35:50.107792] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:10.425 [2024-10-01 01:35:50.108797] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:10.425 [2024-10-01 01:35:50.109805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:10.425 [2024-10-01 01:35:50.110799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:10.425 [2024-10-01 01:35:50.110912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:10.425 [2024-10-01 01:35:50.111815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:10.425 [2024-10-01 01:35:50.111833] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:10.425 [2024-10-01 01:35:50.111841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.111869] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:10.425 [2024-10-01 01:35:50.111882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.111907] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.425 [2024-10-01 01:35:50.111916] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.425 [2024-10-01 01:35:50.111922] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.111940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112043] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:10.425 [2024-10-01 01:35:50.112051] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:10.425 [2024-10-01 01:35:50.112059] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:10.425 [2024-10-01 01:35:50.112066] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:10.425 [2024-10-01 01:35:50.112075] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:10.425 [2024-10-01 01:35:50.112082] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:10.425 [2024-10-01 01:35:50.112091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.425 [2024-10-01 01:35:50.112175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.425 [2024-10-01 01:35:50.112187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.425 [2024-10-01 01:35:50.112199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.425 [2024-10-01 01:35:50.112208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112268] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:10.425 [2024-10-01 01:35:50.112293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112458] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:10.425 [2024-10-01 01:35:50.112466] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:10.425 [2024-10-01 01:35:50.112472] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.112482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112515] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:10.425 [2024-10-01 01:35:50.112535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112561] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.425 [2024-10-01 01:35:50.112569] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.425 [2024-10-01 01:35:50.112575] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.112585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112668] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112680] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.425 [2024-10-01 01:35:50.112703] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.425 [2024-10-01 01:35:50.112709] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.112719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112781] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112805] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:10.425 [2024-10-01 01:35:50.112813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:10.425 [2024-10-01 01:35:50.112820] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:10.425 [2024-10-01 01:35:50.112845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.112939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.112976] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:10.425 [2024-10-01 01:35:50.112986] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:10.425 [2024-10-01 01:35:50.112992] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:10.425 [2024-10-01 01:35:50.113007] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:10.425 [2024-10-01 01:35:50.113029] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:10.425 [2024-10-01 01:35:50.113040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:10.425 [2024-10-01 01:35:50.113052] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:10.425 [2024-10-01 01:35:50.113061] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:10.425 [2024-10-01 01:35:50.113071] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.113080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.113092] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:10.425 [2024-10-01 01:35:50.113100] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.425 [2024-10-01 01:35:50.113106] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.113115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.113127] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:10.425 [2024-10-01 01:35:50.113135] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:10.425 [2024-10-01 01:35:50.113141] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.425 [2024-10-01 01:35:50.113150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:10.425 [2024-10-01 01:35:50.113162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.113183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.113202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:10.425 [2024-10-01 01:35:50.113214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:10.425 ===================================================== 00:18:10.425 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:10.425 ===================================================== 00:18:10.425 Controller Capabilities/Features 00:18:10.425 ================================ 00:18:10.425 Vendor ID: 4e58 00:18:10.425 Subsystem Vendor ID: 4e58 00:18:10.425 Serial Number: SPDK1 00:18:10.425 Model Number: SPDK bdev Controller 00:18:10.425 Firmware Version: 25.01 00:18:10.425 Recommended Arb Burst: 6 00:18:10.425 IEEE OUI Identifier: 8d 6b 50 00:18:10.425 Multi-path I/O 00:18:10.425 May have multiple subsystem ports: Yes 00:18:10.425 May have multiple controllers: Yes 00:18:10.425 Associated with SR-IOV VF: No 00:18:10.425 Max Data Transfer Size: 131072 00:18:10.425 Max Number of Namespaces: 32 00:18:10.425 Max Number of I/O Queues: 127 00:18:10.425 NVMe Specification Version (VS): 1.3 00:18:10.426 NVMe Specification Version (Identify): 1.3 00:18:10.426 Maximum Queue Entries: 256 00:18:10.426 Contiguous Queues Required: Yes 00:18:10.426 Arbitration Mechanisms Supported 00:18:10.426 Weighted Round Robin: Not Supported 00:18:10.426 Vendor Specific: Not Supported 00:18:10.426 Reset Timeout: 15000 ms 00:18:10.426 Doorbell Stride: 4 bytes 00:18:10.426 NVM Subsystem Reset: Not Supported 00:18:10.426 Command Sets Supported 00:18:10.426 NVM Command Set: Supported 00:18:10.426 Boot Partition: Not Supported 00:18:10.426 Memory Page Size Minimum: 4096 bytes 00:18:10.426 Memory Page Size Maximum: 4096 bytes 00:18:10.426 Persistent Memory Region: Not Supported 00:18:10.426 Optional Asynchronous Events Supported 00:18:10.426 Namespace Attribute Notices: Supported 00:18:10.426 Firmware Activation Notices: Not Supported 00:18:10.426 ANA Change Notices: Not Supported 00:18:10.426 PLE Aggregate Log Change Notices: Not Supported 00:18:10.426 LBA Status Info Alert Notices: Not Supported 00:18:10.426 EGE Aggregate Log Change Notices: Not Supported 00:18:10.426 Normal NVM Subsystem Shutdown event: Not Supported 00:18:10.426 Zone Descriptor Change Notices: Not Supported 00:18:10.426 Discovery Log Change Notices: Not Supported 00:18:10.426 Controller Attributes 00:18:10.426 128-bit Host Identifier: Supported 00:18:10.426 Non-Operational Permissive Mode: Not Supported 00:18:10.426 NVM Sets: Not Supported 00:18:10.426 Read Recovery Levels: Not Supported 00:18:10.426 Endurance Groups: Not Supported 00:18:10.426 Predictable Latency Mode: Not Supported 00:18:10.426 Traffic Based Keep ALive: Not Supported 00:18:10.426 Namespace Granularity: Not Supported 00:18:10.426 SQ Associations: Not Supported 00:18:10.426 UUID List: Not Supported 00:18:10.426 Multi-Domain Subsystem: Not Supported 00:18:10.426 Fixed Capacity Management: Not Supported 00:18:10.426 Variable Capacity Management: Not Supported 00:18:10.426 Delete Endurance Group: Not Supported 00:18:10.426 Delete NVM Set: Not Supported 00:18:10.426 Extended LBA Formats Supported: Not Supported 00:18:10.426 Flexible Data Placement Supported: Not Supported 00:18:10.426 00:18:10.426 Controller Memory Buffer Support 00:18:10.426 ================================ 00:18:10.426 Supported: No 00:18:10.426 00:18:10.426 Persistent Memory Region Support 00:18:10.426 
================================ 00:18:10.426 Supported: No 00:18:10.426 00:18:10.426 Admin Command Set Attributes 00:18:10.426 ============================ 00:18:10.426 Security Send/Receive: Not Supported 00:18:10.426 Format NVM: Not Supported 00:18:10.426 Firmware Activate/Download: Not Supported 00:18:10.426 Namespace Management: Not Supported 00:18:10.426 Device Self-Test: Not Supported 00:18:10.426 Directives: Not Supported 00:18:10.426 NVMe-MI: Not Supported 00:18:10.426 Virtualization Management: Not Supported 00:18:10.426 Doorbell Buffer Config: Not Supported 00:18:10.426 Get LBA Status Capability: Not Supported 00:18:10.426 Command & Feature Lockdown Capability: Not Supported 00:18:10.426 Abort Command Limit: 4 00:18:10.426 Async Event Request Limit: 4 00:18:10.426 Number of Firmware Slots: N/A 00:18:10.426 Firmware Slot 1 Read-Only: N/A 00:18:10.426 Firmware Activation Without Reset: N/A 00:18:10.426 Multiple Update Detection Support: N/A 00:18:10.426 Firmware Update Granularity: No Information Provided 00:18:10.426 Per-Namespace SMART Log: No 00:18:10.426 Asymmetric Namespace Access Log Page: Not Supported 00:18:10.426 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:10.426 Command Effects Log Page: Supported 00:18:10.426 Get Log Page Extended Data: Supported 00:18:10.426 Telemetry Log Pages: Not Supported 00:18:10.426 Persistent Event Log Pages: Not Supported 00:18:10.426 Supported Log Pages Log Page: May Support 00:18:10.426 Commands Supported & Effects Log Page: Not Supported 00:18:10.426 Feature Identifiers & Effects Log Page:May Support 00:18:10.426 NVMe-MI Commands & Effects Log Page: May Support 00:18:10.426 Data Area 4 for Telemetry Log: Not Supported 00:18:10.426 Error Log Page Entries Supported: 128 00:18:10.426 Keep Alive: Supported 00:18:10.426 Keep Alive Granularity: 10000 ms 00:18:10.426 00:18:10.426 NVM Command Set Attributes 00:18:10.426 ========================== 00:18:10.426 Submission Queue Entry Size 00:18:10.426 Max: 64 00:18:10.426 Min: 64 00:18:10.426 Completion Queue Entry Size 00:18:10.426 Max: 16 00:18:10.426 Min: 16 00:18:10.426 Number of Namespaces: 32 00:18:10.426 Compare Command: Supported 00:18:10.426 Write Uncorrectable Command: Not Supported 00:18:10.426 Dataset Management Command: Supported 00:18:10.426 Write Zeroes Command: Supported 00:18:10.426 Set Features Save Field: Not Supported 00:18:10.426 Reservations: Not Supported 00:18:10.426 Timestamp: Not Supported 00:18:10.426 Copy: Supported 00:18:10.426 Volatile Write Cache: Present 00:18:10.426 Atomic Write Unit (Normal): 1 00:18:10.426 Atomic Write Unit (PFail): 1 00:18:10.426 Atomic Compare & Write Unit: 1 00:18:10.426 Fused Compare & Write: Supported 00:18:10.426 Scatter-Gather List 00:18:10.426 SGL Command Set: Supported (Dword aligned) 00:18:10.426 SGL Keyed: Not Supported 00:18:10.426 SGL Bit Bucket Descriptor: Not Supported 00:18:10.426 SGL Metadata Pointer: Not Supported 00:18:10.426 Oversized SGL: Not Supported 00:18:10.426 SGL Metadata Address: Not Supported 00:18:10.426 SGL Offset: Not Supported 00:18:10.426 Transport SGL Data Block: Not Supported 00:18:10.426 Replay Protected Memory Block: Not Supported 00:18:10.426 00:18:10.426 Firmware Slot Information 00:18:10.426 ========================= 00:18:10.426 Active slot: 1 00:18:10.426 Slot 1 Firmware Revision: 25.01 00:18:10.426 00:18:10.426 00:18:10.426 Commands Supported and Effects 00:18:10.426 ============================== 00:18:10.426 Admin Commands 00:18:10.426 -------------- 00:18:10.426 Get Log Page (02h): Supported 
00:18:10.426 Identify (06h): Supported 00:18:10.426 Abort (08h): Supported 00:18:10.426 Set Features (09h): Supported 00:18:10.426 Get Features (0Ah): Supported 00:18:10.426 Asynchronous Event Request (0Ch): Supported 00:18:10.426 Keep Alive (18h): Supported 00:18:10.426 I/O Commands 00:18:10.426 ------------ 00:18:10.426 Flush (00h): Supported LBA-Change 00:18:10.426 Write (01h): Supported LBA-Change 00:18:10.426 Read (02h): Supported 00:18:10.426 Compare (05h): Supported 00:18:10.426 Write Zeroes (08h): Supported LBA-Change 00:18:10.426 Dataset Management (09h): Supported LBA-Change 00:18:10.426 Copy (19h): Supported LBA-Change 00:18:10.426 00:18:10.426 Error Log 00:18:10.426 ========= 00:18:10.426 00:18:10.426 Arbitration 00:18:10.426 =========== 00:18:10.426 Arbitration Burst: 1 00:18:10.426 00:18:10.426 Power Management 00:18:10.426 ================ 00:18:10.426 Number of Power States: 1 00:18:10.426 Current Power State: Power State #0 00:18:10.426 Power State #0: 00:18:10.426 Max Power: 0.00 W 00:18:10.426 Non-Operational State: Operational 00:18:10.426 Entry Latency: Not Reported 00:18:10.426 Exit Latency: Not Reported 00:18:10.426 Relative Read Throughput: 0 00:18:10.426 Relative Read Latency: 0 00:18:10.426 Relative Write Throughput: 0 00:18:10.426 Relative Write Latency: 0 00:18:10.426 Idle Power: Not Reported 00:18:10.426 Active Power: Not Reported 00:18:10.426 Non-Operational Permissive Mode: Not Supported 00:18:10.426 00:18:10.426 Health Information 00:18:10.426 ================== 00:18:10.426 Critical Warnings: 00:18:10.426 Available Spare Space: OK 00:18:10.426 Temperature: OK 00:18:10.426 Device Reliability: OK 00:18:10.426 Read Only: No 00:18:10.426 Volatile Memory Backup: OK 00:18:10.426 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:10.426 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:10.426 Available Spare: 0% 00:18:10.426 Available Sp[2024-10-01 01:35:50.113427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:10.426 [2024-10-01 01:35:50.113445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:10.426 [2024-10-01 01:35:50.113491] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:10.426 [2024-10-01 01:35:50.113508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.426 [2024-10-01 01:35:50.113519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.426 [2024-10-01 01:35:50.113529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.426 [2024-10-01 01:35:50.113539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.426 [2024-10-01 01:35:50.116012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:10.426 [2024-10-01 01:35:50.116034] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:10.426 [2024-10-01 01:35:50.116840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:18:10.426 [2024-10-01 01:35:50.116926] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:10.426 [2024-10-01 01:35:50.116939] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:10.426 [2024-10-01 01:35:50.117848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:10.426 [2024-10-01 01:35:50.117876] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:10.426 [2024-10-01 01:35:50.117942] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:10.426 [2024-10-01 01:35:50.121009] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.426 are Threshold: 0% 00:18:10.426 Life Percentage Used: 0% 00:18:10.426 Data Units Read: 0 00:18:10.426 Data Units Written: 0 00:18:10.426 Host Read Commands: 0 00:18:10.426 Host Write Commands: 0 00:18:10.426 Controller Busy Time: 0 minutes 00:18:10.426 Power Cycles: 0 00:18:10.426 Power On Hours: 0 hours 00:18:10.426 Unsafe Shutdowns: 0 00:18:10.426 Unrecoverable Media Errors: 0 00:18:10.426 Lifetime Error Log Entries: 0 00:18:10.426 Warning Temperature Time: 0 minutes 00:18:10.426 Critical Temperature Time: 0 minutes 00:18:10.426 00:18:10.426 Number of Queues 00:18:10.426 ================ 00:18:10.426 Number of I/O Submission Queues: 127 00:18:10.426 Number of I/O Completion Queues: 127 00:18:10.426 00:18:10.426 Active Namespaces 00:18:10.426 ================= 00:18:10.426 Namespace ID:1 00:18:10.426 Error Recovery Timeout: Unlimited 00:18:10.426 Command Set Identifier: NVM (00h) 00:18:10.426 Deallocate: Supported 00:18:10.426 Deallocated/Unwritten Error: Not Supported 00:18:10.426 Deallocated Read Value: Unknown 00:18:10.426 Deallocate in Write Zeroes: Not Supported 00:18:10.426 Deallocated Guard Field: 0xFFFF 00:18:10.426 Flush: Supported 00:18:10.426 Reservation: Supported 00:18:10.426 Namespace Sharing Capabilities: Multiple Controllers 00:18:10.426 Size (in LBAs): 131072 (0GiB) 00:18:10.426 Capacity (in LBAs): 131072 (0GiB) 00:18:10.426 Utilization (in LBAs): 131072 (0GiB) 00:18:10.426 NGUID: 09A2C1F7ED234CCFAAF5091157070105 00:18:10.426 UUID: 09a2c1f7-ed23-4ccf-aaf5-091157070105 00:18:10.426 Thin Provisioning: Not Supported 00:18:10.426 Per-NS Atomic Units: Yes 00:18:10.426 Atomic Boundary Size (Normal): 0 00:18:10.426 Atomic Boundary Size (PFail): 0 00:18:10.426 Atomic Boundary Offset: 0 00:18:10.426 Maximum Single Source Range Length: 65535 00:18:10.426 Maximum Copy Length: 65535 00:18:10.426 Maximum Source Range Count: 1 00:18:10.426 NGUID/EUI64 Never Reused: No 00:18:10.426 Namespace Write Protected: No 00:18:10.426 Number of LBA Formats: 1 00:18:10.426 Current LBA Format: LBA Format #00 00:18:10.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:10.426 00:18:10.426 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.684 [2024-10-01 01:35:50.361852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:15.960 Initializing NVMe Controllers 00:18:15.960 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:15.960 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:15.960 Initialization complete. Launching workers. 00:18:15.960 ======================================================== 00:18:15.960 Latency(us) 00:18:15.960 Device Information : IOPS MiB/s Average min max 00:18:15.960 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33657.34 131.47 3802.25 1172.39 8532.25 00:18:15.960 ======================================================== 00:18:15.960 Total : 33657.34 131.47 3802.25 1172.39 8532.25 00:18:15.960 00:18:15.960 [2024-10-01 01:35:55.382237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:15.960 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:15.960 [2024-10-01 01:35:55.615396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:21.235 Initializing NVMe Controllers 00:18:21.235 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:21.235 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:21.235 Initialization complete. Launching workers. 00:18:21.235 ======================================================== 00:18:21.235 Latency(us) 00:18:21.235 Device Information : IOPS MiB/s Average min max 00:18:21.235 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.86 6996.22 10970.45 00:18:21.235 ======================================================== 00:18:21.235 Total : 16051.20 62.70 7982.86 6996.22 10970.45 00:18:21.235 00:18:21.235 [2024-10-01 01:36:00.656211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:21.235 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:21.235 [2024-10-01 01:36:00.866319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.560 [2024-10-01 01:36:05.944426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.560 Initializing NVMe Controllers 00:18:26.560 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.560 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.560 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:26.560 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:26.560 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:26.560 Initialization complete. Launching workers. 
00:18:26.560 Starting thread on core 2 00:18:26.560 Starting thread on core 3 00:18:26.560 Starting thread on core 1 00:18:26.560 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:26.560 [2024-10-01 01:36:06.246482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.846 [2024-10-01 01:36:09.309092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.846 Initializing NVMe Controllers 00:18:29.846 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.846 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:29.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:29.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:29.846 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:29.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:29.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:29.846 Initialization complete. Launching workers. 00:18:29.846 Starting thread on core 1 with urgent priority queue 00:18:29.846 Starting thread on core 2 with urgent priority queue 00:18:29.846 Starting thread on core 3 with urgent priority queue 00:18:29.846 Starting thread on core 0 with urgent priority queue 00:18:29.846 SPDK bdev Controller (SPDK1 ) core 0: 5419.00 IO/s 18.45 secs/100000 ios 00:18:29.846 SPDK bdev Controller (SPDK1 ) core 1: 5860.33 IO/s 17.06 secs/100000 ios 00:18:29.846 SPDK bdev Controller (SPDK1 ) core 2: 5742.00 IO/s 17.42 secs/100000 ios 00:18:29.846 SPDK bdev Controller (SPDK1 ) core 3: 5882.67 IO/s 17.00 secs/100000 ios 00:18:29.846 ======================================================== 00:18:29.846 00:18:29.846 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:29.846 [2024-10-01 01:36:09.598859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.846 Initializing NVMe Controllers 00:18:29.846 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.846 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:29.846 Namespace ID: 1 size: 0GB 00:18:29.846 Initialization complete. 00:18:29.846 INFO: using host memory buffer for IO 00:18:29.846 Hello world! 
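Every example binary in this run is pointed at the target the same way: a single -r transport string carrying trtype:VFIOUSER, the vfio-user socket directory as traddr, and the subsystem NQN as subnqn. As a hedged illustration (this exact invocation does not appear in the log, but the binary, flags, and the second controller's path and NQN all do), the identify pass could equally be aimed at the other controller created above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_nvme_identify" -g -L nvme -L nvme_vfio -L vfio_pci \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'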
00:18:29.846 [2024-10-01 01:36:09.631514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.846 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:30.107 [2024-10-01 01:36:09.931502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.491 Initializing NVMe Controllers 00:18:31.491 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.491 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.491 Initialization complete. Launching workers. 00:18:31.491 submit (in ns) avg, min, max = 9966.7, 3532.2, 5997221.1 00:18:31.491 complete (in ns) avg, min, max = 20814.9, 2067.8, 4020273.3 00:18:31.491 00:18:31.491 Submit histogram 00:18:31.491 ================ 00:18:31.491 Range in us Cumulative Count 00:18:31.491 3.532 - 3.556: 0.5326% ( 70) 00:18:31.491 3.556 - 3.579: 1.5596% ( 135) 00:18:31.491 3.579 - 3.603: 4.6333% ( 404) 00:18:31.491 3.603 - 3.627: 9.6013% ( 653) 00:18:31.491 3.627 - 3.650: 18.2669% ( 1139) 00:18:31.491 3.650 - 3.674: 26.3390% ( 1061) 00:18:31.491 3.674 - 3.698: 33.8481% ( 987) 00:18:31.491 3.698 - 3.721: 40.6954% ( 900) 00:18:31.491 3.721 - 3.745: 46.7133% ( 791) 00:18:31.491 3.745 - 3.769: 51.7498% ( 662) 00:18:31.491 3.769 - 3.793: 56.4288% ( 615) 00:18:31.491 3.793 - 3.816: 60.3013% ( 509) 00:18:31.491 3.816 - 3.840: 64.0140% ( 488) 00:18:31.491 3.840 - 3.864: 67.9778% ( 521) 00:18:31.491 3.864 - 3.887: 72.0024% ( 529) 00:18:31.491 3.887 - 3.911: 75.7684% ( 495) 00:18:31.491 3.911 - 3.935: 79.2453% ( 457) 00:18:31.491 3.935 - 3.959: 82.2048% ( 389) 00:18:31.491 3.959 - 3.982: 84.6318% ( 319) 00:18:31.491 3.982 - 4.006: 86.7468% ( 278) 00:18:31.491 4.006 - 4.030: 88.5575% ( 238) 00:18:31.491 4.030 - 4.053: 89.9194% ( 179) 00:18:31.491 4.053 - 4.077: 91.1442% ( 161) 00:18:31.491 4.077 - 4.101: 91.9735% ( 109) 00:18:31.491 4.101 - 4.124: 92.8941% ( 121) 00:18:31.491 4.124 - 4.148: 93.9060% ( 133) 00:18:31.491 4.148 - 4.172: 94.6668% ( 100) 00:18:31.491 4.172 - 4.196: 95.2830% ( 81) 00:18:31.491 4.196 - 4.219: 95.5949% ( 41) 00:18:31.491 4.219 - 4.243: 95.8993% ( 40) 00:18:31.491 4.243 - 4.267: 96.0895% ( 25) 00:18:31.491 4.267 - 4.290: 96.2873% ( 26) 00:18:31.491 4.290 - 4.314: 96.4090% ( 16) 00:18:31.491 4.314 - 4.338: 96.5460% ( 18) 00:18:31.491 4.338 - 4.361: 96.6525% ( 14) 00:18:31.491 4.361 - 4.385: 96.7514% ( 13) 00:18:31.491 4.385 - 4.409: 96.8122% ( 8) 00:18:31.491 4.409 - 4.433: 96.8959% ( 11) 00:18:31.491 4.433 - 4.456: 96.9644% ( 9) 00:18:31.491 4.456 - 4.480: 97.0024% ( 5) 00:18:31.491 4.480 - 4.504: 97.0329% ( 4) 00:18:31.491 4.504 - 4.527: 97.0481% ( 2) 00:18:31.491 4.527 - 4.551: 97.0785% ( 4) 00:18:31.491 4.551 - 4.575: 97.1242% ( 6) 00:18:31.491 4.575 - 4.599: 97.1698% ( 6) 00:18:31.491 4.599 - 4.622: 97.2002% ( 4) 00:18:31.491 4.622 - 4.646: 97.2079% ( 1) 00:18:31.491 4.646 - 4.670: 97.2155% ( 1) 00:18:31.491 4.670 - 4.693: 97.2383% ( 3) 00:18:31.491 4.693 - 4.717: 97.2687% ( 4) 00:18:31.491 4.717 - 4.741: 97.2763% ( 1) 00:18:31.491 4.741 - 4.764: 97.2991% ( 3) 00:18:31.491 4.764 - 4.788: 97.3220% ( 3) 00:18:31.491 4.788 - 4.812: 97.3676% ( 6) 00:18:31.491 4.812 - 4.836: 97.4133% ( 6) 00:18:31.491 4.836 - 4.859: 97.4665% ( 7) 00:18:31.491 4.859 - 4.883: 97.5198% ( 7) 00:18:31.491 4.883 
- 4.907: 97.5502% ( 4) 00:18:31.491 4.907 - 4.930: 97.5883% ( 5) 00:18:31.491 4.930 - 4.954: 97.5959% ( 1) 00:18:31.491 4.954 - 4.978: 97.6339% ( 5) 00:18:31.491 4.978 - 5.001: 97.6491% ( 2) 00:18:31.491 5.001 - 5.025: 97.6719% ( 3) 00:18:31.491 5.025 - 5.049: 97.6948% ( 3) 00:18:31.491 5.049 - 5.073: 97.7176% ( 3) 00:18:31.491 5.073 - 5.096: 97.7480% ( 4) 00:18:31.491 5.096 - 5.120: 97.7708% ( 3) 00:18:31.491 5.120 - 5.144: 97.8013% ( 4) 00:18:31.491 5.144 - 5.167: 97.8089% ( 1) 00:18:31.491 5.167 - 5.191: 97.8165% ( 1) 00:18:31.491 5.215 - 5.239: 97.8241% ( 1) 00:18:31.491 5.262 - 5.286: 97.8317% ( 1) 00:18:31.492 5.310 - 5.333: 97.8469% ( 2) 00:18:31.492 5.333 - 5.357: 97.8545% ( 1) 00:18:31.492 5.381 - 5.404: 97.8621% ( 1) 00:18:31.492 5.428 - 5.452: 97.8698% ( 1) 00:18:31.492 5.452 - 5.476: 97.8850% ( 2) 00:18:31.492 5.499 - 5.523: 97.9002% ( 2) 00:18:31.492 5.594 - 5.618: 97.9078% ( 1) 00:18:31.492 5.736 - 5.760: 97.9154% ( 1) 00:18:31.492 6.116 - 6.163: 97.9230% ( 1) 00:18:31.492 6.258 - 6.305: 97.9382% ( 2) 00:18:31.492 6.305 - 6.353: 97.9458% ( 1) 00:18:31.492 6.400 - 6.447: 97.9534% ( 1) 00:18:31.492 6.637 - 6.684: 97.9687% ( 2) 00:18:31.492 6.921 - 6.969: 97.9763% ( 1) 00:18:31.492 7.016 - 7.064: 97.9839% ( 1) 00:18:31.492 7.111 - 7.159: 97.9915% ( 1) 00:18:31.492 7.206 - 7.253: 97.9991% ( 1) 00:18:31.492 7.253 - 7.301: 98.0067% ( 1) 00:18:31.492 7.348 - 7.396: 98.0143% ( 1) 00:18:31.492 7.443 - 7.490: 98.0219% ( 1) 00:18:31.492 7.538 - 7.585: 98.0371% ( 2) 00:18:31.492 7.585 - 7.633: 98.0676% ( 4) 00:18:31.492 7.633 - 7.680: 98.0828% ( 2) 00:18:31.492 7.680 - 7.727: 98.0980% ( 2) 00:18:31.492 7.775 - 7.822: 98.1056% ( 1) 00:18:31.492 7.822 - 7.870: 98.1284% ( 3) 00:18:31.492 7.870 - 7.917: 98.1512% ( 3) 00:18:31.492 7.964 - 8.012: 98.1589% ( 1) 00:18:31.492 8.012 - 8.059: 98.1817% ( 3) 00:18:31.492 8.059 - 8.107: 98.1893% ( 1) 00:18:31.492 8.107 - 8.154: 98.2121% ( 3) 00:18:31.492 8.154 - 8.201: 98.2197% ( 1) 00:18:31.492 8.201 - 8.249: 98.2349% ( 2) 00:18:31.492 8.249 - 8.296: 98.2425% ( 1) 00:18:31.492 8.296 - 8.344: 98.2502% ( 1) 00:18:31.492 8.344 - 8.391: 98.2578% ( 1) 00:18:31.492 8.391 - 8.439: 98.2654% ( 1) 00:18:31.492 8.486 - 8.533: 98.2730% ( 1) 00:18:31.492 8.533 - 8.581: 98.2882% ( 2) 00:18:31.492 8.581 - 8.628: 98.2958% ( 1) 00:18:31.492 8.628 - 8.676: 98.3034% ( 1) 00:18:31.492 8.723 - 8.770: 98.3110% ( 1) 00:18:31.492 8.770 - 8.818: 98.3186% ( 1) 00:18:31.492 8.818 - 8.865: 98.3338% ( 2) 00:18:31.492 9.007 - 9.055: 98.3491% ( 2) 00:18:31.492 9.055 - 9.102: 98.3567% ( 1) 00:18:31.492 9.150 - 9.197: 98.3643% ( 1) 00:18:31.492 9.197 - 9.244: 98.3719% ( 1) 00:18:31.492 9.576 - 9.624: 98.3871% ( 2) 00:18:31.492 9.671 - 9.719: 98.3947% ( 1) 00:18:31.492 9.719 - 9.766: 98.4023% ( 1) 00:18:31.492 9.813 - 9.861: 98.4099% ( 1) 00:18:31.492 9.861 - 9.908: 98.4175% ( 1) 00:18:31.492 10.003 - 10.050: 98.4251% ( 1) 00:18:31.492 10.524 - 10.572: 98.4327% ( 1) 00:18:31.492 10.619 - 10.667: 98.4404% ( 1) 00:18:31.492 11.804 - 11.852: 98.4480% ( 1) 00:18:31.492 11.852 - 11.899: 98.4556% ( 1) 00:18:31.492 11.994 - 12.041: 98.4632% ( 1) 00:18:31.492 12.231 - 12.326: 98.4708% ( 1) 00:18:31.492 12.610 - 12.705: 98.4784% ( 1) 00:18:31.492 12.705 - 12.800: 98.4860% ( 1) 00:18:31.492 12.800 - 12.895: 98.5012% ( 2) 00:18:31.492 12.990 - 13.084: 98.5088% ( 1) 00:18:31.492 13.369 - 13.464: 98.5164% ( 1) 00:18:31.492 13.464 - 13.559: 98.5240% ( 1) 00:18:31.492 13.559 - 13.653: 98.5393% ( 2) 00:18:31.492 14.127 - 14.222: 98.5469% ( 1) 00:18:31.492 14.317 - 14.412: 98.5545% ( 1) 00:18:31.492 
14.601 - 14.696: 98.5621% ( 1) 00:18:31.492 15.929 - 16.024: 98.5697% ( 1) 00:18:31.492 17.351 - 17.446: 98.6153% ( 6) 00:18:31.492 17.446 - 17.541: 98.6534% ( 5) 00:18:31.492 17.541 - 17.636: 98.6762% ( 3) 00:18:31.492 17.636 - 17.730: 98.7295% ( 7) 00:18:31.492 17.730 - 17.825: 98.7903% ( 8) 00:18:31.492 17.825 - 17.920: 98.8208% ( 4) 00:18:31.492 17.920 - 18.015: 98.8816% ( 8) 00:18:31.492 18.015 - 18.110: 98.9805% ( 13) 00:18:31.492 18.110 - 18.204: 99.0338% ( 7) 00:18:31.492 18.204 - 18.299: 99.1935% ( 21) 00:18:31.492 18.299 - 18.394: 99.2772% ( 11) 00:18:31.492 18.394 - 18.489: 99.3229% ( 6) 00:18:31.492 18.489 - 18.584: 99.4066% ( 11) 00:18:31.492 18.584 - 18.679: 99.4522% ( 6) 00:18:31.492 18.679 - 18.773: 99.5207% ( 9) 00:18:31.492 18.773 - 18.868: 99.5968% ( 10) 00:18:31.492 18.868 - 18.963: 99.6272% ( 4) 00:18:31.492 18.963 - 19.058: 99.6729% ( 6) 00:18:31.492 19.058 - 19.153: 99.6881% ( 2) 00:18:31.492 19.247 - 19.342: 99.6957% ( 1) 00:18:31.492 19.342 - 19.437: 99.7033% ( 1) 00:18:31.492 19.437 - 19.532: 99.7109% ( 1) 00:18:31.492 19.532 - 19.627: 99.7337% ( 3) 00:18:31.492 19.627 - 19.721: 99.7413% ( 1) 00:18:31.492 19.721 - 19.816: 99.7642% ( 3) 00:18:31.492 19.816 - 19.911: 99.7794% ( 2) 00:18:31.492 20.006 - 20.101: 99.7870% ( 1) 00:18:31.492 20.101 - 20.196: 99.7946% ( 1) 00:18:31.492 20.480 - 20.575: 99.8022% ( 1) 00:18:31.492 20.954 - 21.049: 99.8098% ( 1) 00:18:31.492 22.471 - 22.566: 99.8174% ( 1) 00:18:31.492 23.135 - 23.230: 99.8250% ( 1) 00:18:31.492 26.169 - 26.359: 99.8326% ( 1) 00:18:31.492 26.738 - 26.927: 99.8402% ( 1) 00:18:31.492 27.686 - 27.876: 99.8478% ( 1) 00:18:31.492 28.634 - 28.824: 99.8554% ( 1) 00:18:31.492 3980.705 - 4004.978: 99.9467% ( 12) 00:18:31.492 4004.978 - 4029.250: 99.9924% ( 6) 00:18:31.492 5995.330 - 6019.603: 100.0000% ( 1) 00:18:31.492 00:18:31.492 Complete histogram 00:18:31.492 ================== 00:18:31.492 Range in us Cumulative Count 00:18:31.492 2.062 - 2.074: 0.5478% ( 72) 00:18:31.492 2.074 - 2.086: 23.6153% ( 3032) 00:18:31.492 2.086 - 2.098: 41.9127% ( 2405) 00:18:31.492 2.098 - 2.110: 43.8299% ( 252) 00:18:31.492 2.110 - 2.121: 49.4522% ( 739) 00:18:31.492 2.121 - 2.133: 52.1074% ( 349) 00:18:31.492 2.133 - 2.145: 55.7745% ( 482) 00:18:31.492 2.145 - 2.157: 68.0310% ( 1611) 00:18:31.492 2.157 - 2.169: 73.4175% ( 708) 00:18:31.492 2.169 - 2.181: 75.0304% ( 212) 00:18:31.492 2.181 - 2.193: 77.8911% ( 376) 00:18:31.492 2.193 - 2.204: 79.3594% ( 193) 00:18:31.492 2.204 - 2.216: 80.7669% ( 185) 00:18:31.492 2.216 - 2.228: 85.2708% ( 592) 00:18:31.492 2.228 - 2.240: 88.3825% ( 409) 00:18:31.492 2.240 - 2.252: 90.5965% ( 291) 00:18:31.492 2.252 - 2.264: 91.8822% ( 169) 00:18:31.492 2.264 - 2.276: 92.4224% ( 71) 00:18:31.492 2.276 - 2.287: 92.8637% ( 58) 00:18:31.492 2.287 - 2.299: 93.2288% ( 48) 00:18:31.492 2.299 - 2.311: 93.7538% ( 69) 00:18:31.492 2.311 - 2.323: 95.1080% ( 178) 00:18:31.492 2.323 - 2.335: 95.3819% ( 36) 00:18:31.492 2.335 - 2.347: 95.4504% ( 9) 00:18:31.492 2.347 - 2.359: 95.5037% ( 7) 00:18:31.492 2.359 - 2.370: 95.6939% ( 25) 00:18:31.492 2.370 - 2.382: 95.9145% ( 29) 00:18:31.492 2.382 - 2.394: 96.2797% ( 48) 00:18:31.492 2.394 - 2.406: 96.8274% ( 72) 00:18:31.492 2.406 - 2.418: 97.1850% ( 47) 00:18:31.492 2.418 - 2.430: 97.4133% ( 30) 00:18:31.492 2.430 - 2.441: 97.6339% ( 29) 00:18:31.492 2.441 - 2.453: 97.7480% ( 15) 00:18:31.492 2.453 - 2.465: 97.8926% ( 19) 00:18:31.492 2.465 - 2.477: 98.0067% ( 15) 00:18:31.492 2.477 - 2.489: 98.0523% ( 6) 00:18:31.492 2.489 - 2.501: 98.1284% ( 10) 00:18:31.492 2.501 
- 2.513: 98.2121% ( 11) 00:18:31.492 2.513 - 2.524: 98.2654% ( 7) 00:18:31.492 2.524 - 2.536: 98.2882% ( 3) 00:18:31.492 2.536 - 2.548: 98.3186% ( 4) 00:18:31.492 2.548 - 2.560: 98.3414% ( 3) 00:18:31.492 2.560 - 2.572: 98.3567% ( 2) 00:18:31.492 2.572 - 2.584: 98.3643% ( 1) 00:18:31.492 2.584 - 2.596: 98.3795% ( 2) 00:18:31.492 2.607 - 2.619: 98.4023% ( 3) 00:18:31.492 2.619 - 2.631: 98.4099% ( 1) 00:18:31.492 2.631 - 2.643: 98.4175% ( 1) 00:18:31.492 2.643 - 2.655: 98.4251% ( 1) 00:18:31.492 2.702 - 2.714: 98.4404% ( 2) 00:18:31.492 2.726 - 2.738: 98.4480% ( 1) 00:18:31.492 2.750 - 2.761: 98.4556% ( 1) 00:18:31.492 2.761 - 2.773: 98.4632% ( 1) 00:18:31.492 2.785 - 2.797: 98.4784% ( 2) 00:18:31.492 2.844 - 2.856: 98.4860% ( 1) 00:18:31.492 2.856 - 2.868: 98.4936% ( 1) 00:18:31.492 2.987 - 2.999: 98.5012% ( 1) 00:18:31.492 3.129 - 3.153: 98.5088% ( 1) 00:18:31.492 3.200 - 3.224: 98.5164% ( 1) 00:18:31.492 3.366 - 3.390: 98.5240% ( 1) 00:18:31.492 3.390 - 3.413: 98.5469% ( 3) 00:18:31.492 3.413 - 3.437: 98.5621% ( 2) 00:18:31.492 3.437 - 3.461: 98.5697% ( 1) 00:18:31.492 3.461 - 3.484: 98.5925% ( 3) 00:18:31.492 3.484 - 3.508: 98.6153% ( 3) 00:18:31.492 3.508 - 3.532: 98.6306% ( 2) 00:18:31.492 3.532 - 3.556: 98.6458% ( 2) 00:18:31.492 3.603 - 3.627: 98.6610% ( 2) 00:18:31.492 3.650 - 3.674: 98.6838% ( 3) 00:18:31.492 [2024-10-01 01:36:10.952455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.493 3.674 - 3.698: 98.6914% ( 1) 00:18:31.493 3.721 - 3.745: 98.7142% ( 3) 00:18:31.493 3.745 - 3.769: 98.7219% ( 1) 00:18:31.493 3.816 - 3.840: 98.7371% ( 2) 00:18:31.493 3.840 - 3.864: 98.7447% ( 1) 00:18:31.493 3.935 - 3.959: 98.7523% ( 1) 00:18:31.493 3.982 - 4.006: 98.7599% ( 1) 00:18:31.493 4.717 - 4.741: 98.7675% ( 1) 00:18:31.493 5.001 - 5.025: 98.7751% ( 1) 00:18:31.493 5.357 - 5.381: 98.7827% ( 1) 00:18:31.493 5.381 - 5.404: 98.7903% ( 1) 00:18:31.493 5.452 - 5.476: 98.7979% ( 1) 00:18:31.493 5.807 - 5.831: 98.8055% ( 1) 00:18:31.493 5.879 - 5.902: 98.8131% ( 1) 00:18:31.493 6.044 - 6.068: 98.8208% ( 1) 00:18:31.493 6.068 - 6.116: 98.8284% ( 1) 00:18:31.493 6.116 - 6.163: 98.8360% ( 1) 00:18:31.493 6.258 - 6.305: 98.8436% ( 1) 00:18:31.493 6.305 - 6.353: 98.8512% ( 1) 00:18:31.493 6.400 - 6.447: 98.8588% ( 1) 00:18:31.493 6.542 - 6.590: 98.8740% ( 2) 00:18:31.493 6.590 - 6.637: 98.8816% ( 1) 00:18:31.493 6.637 - 6.684: 98.8892% ( 1) 00:18:31.493 6.732 - 6.779: 98.8968% ( 1) 00:18:31.493 6.779 - 6.827: 98.9044% ( 1) 00:18:31.493 8.249 - 8.296: 98.9197% ( 2) 00:18:31.493 9.576 - 9.624: 98.9273% ( 1) 00:18:31.493 15.455 - 15.550: 98.9425% ( 2) 00:18:31.493 15.550 - 15.644: 98.9501% ( 1) 00:18:31.493 15.739 - 15.834: 98.9729% ( 3) 00:18:31.493 15.834 - 15.929: 99.0033% ( 4) 00:18:31.493 15.929 - 16.024: 99.0338% ( 4) 00:18:31.493 16.024 - 16.119: 99.0566% ( 3) 00:18:31.493 16.119 - 16.213: 99.0794% ( 3) 00:18:31.493 16.213 - 16.308: 99.1023% ( 3) 00:18:31.493 16.308 - 16.403: 99.1403% ( 5) 00:18:31.493 16.403 - 16.498: 99.1707% ( 4) 00:18:31.493 16.498 - 16.593: 99.2240% ( 7) 00:18:31.493 16.593 - 16.687: 99.3001% ( 10) 00:18:31.493 16.687 - 16.782: 99.3229% ( 3) 00:18:31.493 16.782 - 16.877: 99.3685% ( 6) 00:18:31.493 16.877 - 16.972: 99.3761% ( 1) 00:18:31.493 16.972 - 17.067: 99.3837% ( 1) 00:18:31.493 17.067 - 17.161: 99.3990% ( 2) 00:18:31.493 17.161 - 17.256: 99.4066% ( 1) 00:18:31.493 17.351 - 17.446: 99.4142% ( 1) 00:18:31.493 17.541 - 17.636: 99.4218% ( 1) 00:18:31.493 17.730 - 17.825: 99.4370% ( 2) 00:18:31.493 17.825 -
17.920: 99.4446% ( 1) 00:18:31.493 17.920 - 18.015: 99.4598% ( 2) 00:18:31.493 18.015 - 18.110: 99.4750% ( 2) 00:18:31.493 18.110 - 18.204: 99.4979% ( 3) 00:18:31.493 18.679 - 18.773: 99.5055% ( 1) 00:18:31.493 19.247 - 19.342: 99.5207% ( 2) 00:18:31.493 25.221 - 25.410: 99.5283% ( 1) 00:18:31.493 2014.625 - 2026.761: 99.5359% ( 1) 00:18:31.493 2026.761 - 2038.898: 99.5435% ( 1) 00:18:31.493 3980.705 - 4004.978: 99.8935% ( 46) 00:18:31.493 4004.978 - 4029.250: 100.0000% ( 14) 00:18:31.493 00:18:31.493 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:31.493 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:31.493 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:31.493 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:31.493 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.493 [ 00:18:31.493 { 00:18:31.493 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.493 "subtype": "Discovery", 00:18:31.493 "listen_addresses": [], 00:18:31.493 "allow_any_host": true, 00:18:31.493 "hosts": [] 00:18:31.493 }, 00:18:31.493 { 00:18:31.493 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.493 "subtype": "NVMe", 00:18:31.493 "listen_addresses": [ 00:18:31.493 { 00:18:31.493 "trtype": "VFIOUSER", 00:18:31.493 "adrfam": "IPv4", 00:18:31.493 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.493 "trsvcid": "0" 00:18:31.493 } 00:18:31.493 ], 00:18:31.493 "allow_any_host": true, 00:18:31.493 "hosts": [], 00:18:31.493 "serial_number": "SPDK1", 00:18:31.493 "model_number": "SPDK bdev Controller", 00:18:31.493 "max_namespaces": 32, 00:18:31.493 "min_cntlid": 1, 00:18:31.493 "max_cntlid": 65519, 00:18:31.493 "namespaces": [ 00:18:31.493 { 00:18:31.493 "nsid": 1, 00:18:31.493 "bdev_name": "Malloc1", 00:18:31.493 "name": "Malloc1", 00:18:31.493 "nguid": "09A2C1F7ED234CCFAAF5091157070105", 00:18:31.493 "uuid": "09a2c1f7-ed23-4ccf-aaf5-091157070105" 00:18:31.493 } 00:18:31.493 ] 00:18:31.493 }, 00:18:31.493 { 00:18:31.493 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.493 "subtype": "NVMe", 00:18:31.493 "listen_addresses": [ 00:18:31.493 { 00:18:31.493 "trtype": "VFIOUSER", 00:18:31.493 "adrfam": "IPv4", 00:18:31.493 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.493 "trsvcid": "0" 00:18:31.493 } 00:18:31.493 ], 00:18:31.493 "allow_any_host": true, 00:18:31.493 "hosts": [], 00:18:31.493 "serial_number": "SPDK2", 00:18:31.493 "model_number": "SPDK bdev Controller", 00:18:31.493 "max_namespaces": 32, 00:18:31.493 "min_cntlid": 1, 00:18:31.493 "max_cntlid": 65519, 00:18:31.493 "namespaces": [ 00:18:31.493 { 00:18:31.493 "nsid": 1, 00:18:31.493 "bdev_name": "Malloc2", 00:18:31.493 "name": "Malloc2", 00:18:31.493 "nguid": "06B67F0D147C4C328363FD2C5FFEA5FB", 00:18:31.493 "uuid": "06b67f0d-147c-4c32-8363-fd2c5ffea5fb" 00:18:31.493 } 00:18:31.493 ] 00:18:31.493 } 00:18:31.493 ] 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=893930 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:31.493 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:31.751 [2024-10-01 01:36:11.427453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.751 Malloc3 00:18:31.751 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:32.008 [2024-10-01 01:36:11.835496] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:32.008 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:32.265 Asynchronous Event Request test 00:18:32.265 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.265 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:32.265 Registering asynchronous event callbacks... 00:18:32.265 Starting namespace attribute notice tests for all controllers... 00:18:32.265 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:32.265 aer_cb - Changed Namespace 00:18:32.265 Cleaning up... 
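The namespace-change AER exercise above boils down to a short RPC sequence; the updated subsystem listing it produces follows below. A condensed sketch using only the commands visible in this trace, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and the background-PID handling simplified (the harness itself uses its waitforfile helper for the touch file):

    # start the AER listener against the vfio-user controller in the background
    $SPDK/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    # wait until the listener signals readiness via the touch file, then clear it
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
    rm -f /tmp/aer_touch_file
    # adding a second namespace to cnode1 is what triggers the "Changed Namespace" notice above
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    $SPDK/scripts/rpc.py nvmf_get_subsystems
    wait $aerpid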
00:18:32.265 [ 00:18:32.265 { 00:18:32.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:32.265 "subtype": "Discovery", 00:18:32.265 "listen_addresses": [], 00:18:32.265 "allow_any_host": true, 00:18:32.265 "hosts": [] 00:18:32.265 }, 00:18:32.265 { 00:18:32.265 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:32.265 "subtype": "NVMe", 00:18:32.265 "listen_addresses": [ 00:18:32.265 { 00:18:32.265 "trtype": "VFIOUSER", 00:18:32.265 "adrfam": "IPv4", 00:18:32.265 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:32.265 "trsvcid": "0" 00:18:32.265 } 00:18:32.265 ], 00:18:32.265 "allow_any_host": true, 00:18:32.265 "hosts": [], 00:18:32.265 "serial_number": "SPDK1", 00:18:32.265 "model_number": "SPDK bdev Controller", 00:18:32.265 "max_namespaces": 32, 00:18:32.265 "min_cntlid": 1, 00:18:32.265 "max_cntlid": 65519, 00:18:32.265 "namespaces": [ 00:18:32.265 { 00:18:32.265 "nsid": 1, 00:18:32.265 "bdev_name": "Malloc1", 00:18:32.265 "name": "Malloc1", 00:18:32.265 "nguid": "09A2C1F7ED234CCFAAF5091157070105", 00:18:32.265 "uuid": "09a2c1f7-ed23-4ccf-aaf5-091157070105" 00:18:32.265 }, 00:18:32.265 { 00:18:32.265 "nsid": 2, 00:18:32.265 "bdev_name": "Malloc3", 00:18:32.265 "name": "Malloc3", 00:18:32.265 "nguid": "FF5579D7FADB4404B557EDA3FE7864E8", 00:18:32.265 "uuid": "ff5579d7-fadb-4404-b557-eda3fe7864e8" 00:18:32.265 } 00:18:32.265 ] 00:18:32.265 }, 00:18:32.265 { 00:18:32.265 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:32.265 "subtype": "NVMe", 00:18:32.265 "listen_addresses": [ 00:18:32.265 { 00:18:32.265 "trtype": "VFIOUSER", 00:18:32.265 "adrfam": "IPv4", 00:18:32.265 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:32.265 "trsvcid": "0" 00:18:32.265 } 00:18:32.265 ], 00:18:32.265 "allow_any_host": true, 00:18:32.265 "hosts": [], 00:18:32.265 "serial_number": "SPDK2", 00:18:32.265 "model_number": "SPDK bdev Controller", 00:18:32.266 "max_namespaces": 32, 00:18:32.266 "min_cntlid": 1, 00:18:32.266 "max_cntlid": 65519, 00:18:32.266 "namespaces": [ 00:18:32.266 { 00:18:32.266 "nsid": 1, 00:18:32.266 "bdev_name": "Malloc2", 00:18:32.266 "name": "Malloc2", 00:18:32.266 "nguid": "06B67F0D147C4C328363FD2C5FFEA5FB", 00:18:32.266 "uuid": "06b67f0d-147c-4c32-8363-fd2c5ffea5fb" 00:18:32.266 } 00:18:32.266 ] 00:18:32.266 } 00:18:32.266 ] 00:18:32.525 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 893930 00:18:32.525 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:32.525 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:32.525 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:32.525 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:32.525 [2024-10-01 01:36:12.145584] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:18:32.525 [2024-10-01 01:36:12.145628] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894060 ] 00:18:32.525 [2024-10-01 01:36:12.180156] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:32.525 [2024-10-01 01:36:12.188324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:32.525 [2024-10-01 01:36:12.188354] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2496964000 00:18:32.525 [2024-10-01 01:36:12.189314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.191007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.191314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.192333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.193345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.194352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.195357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:32.525 [2024-10-01 01:36:12.196360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:32.526 [2024-10-01 01:36:12.197369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:32.526 [2024-10-01 01:36:12.197391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f249565c000 00:18:32.526 [2024-10-01 01:36:12.198510] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:32.526 [2024-10-01 01:36:12.214743] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:32.526 [2024-10-01 01:36:12.214780] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:32.526 [2024-10-01 01:36:12.219890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:32.526 [2024-10-01 01:36:12.219944] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:32.526 [2024-10-01 01:36:12.220067] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:32.526 [2024-10-01 
01:36:12.220096] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:32.526 [2024-10-01 01:36:12.220108] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:32.526 [2024-10-01 01:36:12.220895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:32.526 [2024-10-01 01:36:12.220915] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:32.526 [2024-10-01 01:36:12.220927] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:32.526 [2024-10-01 01:36:12.221899] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:32.526 [2024-10-01 01:36:12.221919] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:32.526 [2024-10-01 01:36:12.221931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.222905] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:32.526 [2024-10-01 01:36:12.222928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.223912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:32.526 [2024-10-01 01:36:12.223931] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:32.526 [2024-10-01 01:36:12.223940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.223952] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.224061] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:32.526 [2024-10-01 01:36:12.224073] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.224082] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:32.526 [2024-10-01 01:36:12.224921] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:32.526 [2024-10-01 01:36:12.225921] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:32.526 [2024-10-01 01:36:12.226933] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:18:32.526 [2024-10-01 01:36:12.227924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.526 [2024-10-01 01:36:12.228013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:32.526 [2024-10-01 01:36:12.228946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:32.526 [2024-10-01 01:36:12.228966] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:32.526 [2024-10-01 01:36:12.228975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.229024] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:32.526 [2024-10-01 01:36:12.229040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.229060] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:32.526 [2024-10-01 01:36:12.229070] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:32.526 [2024-10-01 01:36:12.229077] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.526 [2024-10-01 01:36:12.229094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.526 [2024-10-01 01:36:12.233013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:32.526 [2024-10-01 01:36:12.233036] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:32.526 [2024-10-01 01:36:12.233045] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:32.526 [2024-10-01 01:36:12.233053] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:32.526 [2024-10-01 01:36:12.233065] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:32.526 [2024-10-01 01:36:12.233074] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:32.526 [2024-10-01 01:36:12.233082] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:32.526 [2024-10-01 01:36:12.233090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.233103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.233119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:32.526 [2024-10-01 01:36:12.241008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:32.526 [2024-10-01 01:36:12.241033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.526 [2024-10-01 01:36:12.241063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.526 [2024-10-01 01:36:12.241076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.526 [2024-10-01 01:36:12.241088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.526 [2024-10-01 01:36:12.241097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.241115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.241130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:32.526 [2024-10-01 01:36:12.249008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:32.526 [2024-10-01 01:36:12.249026] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:32.526 [2024-10-01 01:36:12.249035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.249047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.249061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.249076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:32.526 [2024-10-01 01:36:12.257010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:32.526 [2024-10-01 01:36:12.257084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.257100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.257113] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:32.526 [2024-10-01 01:36:12.257125] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:32.526 [2024-10-01 01:36:12.257131] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:18:32.526 [2024-10-01 01:36:12.257141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:32.526 [2024-10-01 01:36:12.265009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:32.526 [2024-10-01 01:36:12.265032] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:32.526 [2024-10-01 01:36:12.265052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.265066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:32.526 [2024-10-01 01:36:12.265079] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:32.526 [2024-10-01 01:36:12.265087] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:32.526 [2024-10-01 01:36:12.265093] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.527 [2024-10-01 01:36:12.265103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.273009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.273036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.273052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.273065] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:32.527 [2024-10-01 01:36:12.273073] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:32.527 [2024-10-01 01:36:12.273079] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.527 [2024-10-01 01:36:12.273089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.281009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.281031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281094] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:32.527 [2024-10-01 01:36:12.281108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:32.527 [2024-10-01 01:36:12.281117] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:32.527 [2024-10-01 01:36:12.281142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.289011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.289036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.297010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.297036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.305015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.305040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.313020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.313054] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:32.527 [2024-10-01 01:36:12.313065] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:32.527 [2024-10-01 01:36:12.313071] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:32.527 [2024-10-01 01:36:12.313078] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:32.527 [2024-10-01 01:36:12.313084] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:32.527 [2024-10-01 01:36:12.313094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:32.527 [2024-10-01 01:36:12.313106] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:32.527 [2024-10-01 01:36:12.313115] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:32.527 [2024-10-01 01:36:12.313121] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.527 [2024-10-01 01:36:12.313130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.313141] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:32.527 [2024-10-01 01:36:12.313149] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:32.527 [2024-10-01 01:36:12.313155] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.527 [2024-10-01 01:36:12.313164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.313176] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:32.527 [2024-10-01 01:36:12.313184] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:32.527 [2024-10-01 01:36:12.313190] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:32.527 [2024-10-01 01:36:12.313199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:32.527 [2024-10-01 01:36:12.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.321038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.321057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:32.527 [2024-10-01 01:36:12.321069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:32.527 ===================================================== 00:18:32.527 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:32.527 ===================================================== 00:18:32.527 Controller Capabilities/Features 00:18:32.527 ================================ 00:18:32.527 Vendor ID: 4e58 00:18:32.527 Subsystem Vendor ID: 4e58 00:18:32.527 Serial Number: SPDK2 00:18:32.527 Model Number: SPDK bdev Controller 00:18:32.527 Firmware Version: 25.01 00:18:32.527 Recommended Arb Burst: 6 00:18:32.527 IEEE OUI Identifier: 8d 6b 50 00:18:32.527 Multi-path I/O 00:18:32.527 May have multiple subsystem ports: Yes 00:18:32.527 May have multiple controllers: Yes 00:18:32.527 Associated with SR-IOV VF: No 00:18:32.527 Max Data Transfer Size: 131072 00:18:32.527 Max Number of Namespaces: 32 00:18:32.527 Max Number of I/O Queues: 127 00:18:32.527 NVMe Specification Version (VS): 1.3 00:18:32.527 NVMe Specification Version (Identify): 1.3 00:18:32.527 Maximum Queue Entries: 256 00:18:32.527 Contiguous Queues Required: Yes 00:18:32.527 Arbitration Mechanisms Supported 00:18:32.527 Weighted Round Robin: Not Supported 00:18:32.527 Vendor Specific: Not Supported 00:18:32.527 Reset Timeout: 15000 ms 00:18:32.527 Doorbell Stride: 4 bytes 00:18:32.527 NVM Subsystem Reset: Not Supported 00:18:32.527 Command 
Sets Supported 00:18:32.527 NVM Command Set: Supported 00:18:32.527 Boot Partition: Not Supported 00:18:32.527 Memory Page Size Minimum: 4096 bytes 00:18:32.527 Memory Page Size Maximum: 4096 bytes 00:18:32.527 Persistent Memory Region: Not Supported 00:18:32.527 Optional Asynchronous Events Supported 00:18:32.527 Namespace Attribute Notices: Supported 00:18:32.527 Firmware Activation Notices: Not Supported 00:18:32.527 ANA Change Notices: Not Supported 00:18:32.527 PLE Aggregate Log Change Notices: Not Supported 00:18:32.527 LBA Status Info Alert Notices: Not Supported 00:18:32.527 EGE Aggregate Log Change Notices: Not Supported 00:18:32.527 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.527 Zone Descriptor Change Notices: Not Supported 00:18:32.527 Discovery Log Change Notices: Not Supported 00:18:32.527 Controller Attributes 00:18:32.527 128-bit Host Identifier: Supported 00:18:32.527 Non-Operational Permissive Mode: Not Supported 00:18:32.527 NVM Sets: Not Supported 00:18:32.527 Read Recovery Levels: Not Supported 00:18:32.527 Endurance Groups: Not Supported 00:18:32.527 Predictable Latency Mode: Not Supported 00:18:32.527 Traffic Based Keep ALive: Not Supported 00:18:32.527 Namespace Granularity: Not Supported 00:18:32.527 SQ Associations: Not Supported 00:18:32.527 UUID List: Not Supported 00:18:32.527 Multi-Domain Subsystem: Not Supported 00:18:32.527 Fixed Capacity Management: Not Supported 00:18:32.527 Variable Capacity Management: Not Supported 00:18:32.527 Delete Endurance Group: Not Supported 00:18:32.527 Delete NVM Set: Not Supported 00:18:32.527 Extended LBA Formats Supported: Not Supported 00:18:32.527 Flexible Data Placement Supported: Not Supported 00:18:32.527 00:18:32.527 Controller Memory Buffer Support 00:18:32.527 ================================ 00:18:32.527 Supported: No 00:18:32.527 00:18:32.527 Persistent Memory Region Support 00:18:32.527 ================================ 00:18:32.527 Supported: No 00:18:32.527 00:18:32.527 Admin Command Set Attributes 00:18:32.527 ============================ 00:18:32.527 Security Send/Receive: Not Supported 00:18:32.527 Format NVM: Not Supported 00:18:32.528 Firmware Activate/Download: Not Supported 00:18:32.528 Namespace Management: Not Supported 00:18:32.528 Device Self-Test: Not Supported 00:18:32.528 Directives: Not Supported 00:18:32.528 NVMe-MI: Not Supported 00:18:32.528 Virtualization Management: Not Supported 00:18:32.528 Doorbell Buffer Config: Not Supported 00:18:32.528 Get LBA Status Capability: Not Supported 00:18:32.528 Command & Feature Lockdown Capability: Not Supported 00:18:32.528 Abort Command Limit: 4 00:18:32.528 Async Event Request Limit: 4 00:18:32.528 Number of Firmware Slots: N/A 00:18:32.528 Firmware Slot 1 Read-Only: N/A 00:18:32.528 Firmware Activation Without Reset: N/A 00:18:32.528 Multiple Update Detection Support: N/A 00:18:32.528 Firmware Update Granularity: No Information Provided 00:18:32.528 Per-Namespace SMART Log: No 00:18:32.528 Asymmetric Namespace Access Log Page: Not Supported 00:18:32.528 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:32.528 Command Effects Log Page: Supported 00:18:32.528 Get Log Page Extended Data: Supported 00:18:32.528 Telemetry Log Pages: Not Supported 00:18:32.528 Persistent Event Log Pages: Not Supported 00:18:32.528 Supported Log Pages Log Page: May Support 00:18:32.528 Commands Supported & Effects Log Page: Not Supported 00:18:32.528 Feature Identifiers & Effects Log Page:May Support 00:18:32.528 NVMe-MI Commands & Effects Log Page: May Support 
00:18:32.528 Data Area 4 for Telemetry Log: Not Supported 00:18:32.528 Error Log Page Entries Supported: 128 00:18:32.528 Keep Alive: Supported 00:18:32.528 Keep Alive Granularity: 10000 ms 00:18:32.528 00:18:32.528 NVM Command Set Attributes 00:18:32.528 ========================== 00:18:32.528 Submission Queue Entry Size 00:18:32.528 Max: 64 00:18:32.528 Min: 64 00:18:32.528 Completion Queue Entry Size 00:18:32.528 Max: 16 00:18:32.528 Min: 16 00:18:32.528 Number of Namespaces: 32 00:18:32.528 Compare Command: Supported 00:18:32.528 Write Uncorrectable Command: Not Supported 00:18:32.528 Dataset Management Command: Supported 00:18:32.528 Write Zeroes Command: Supported 00:18:32.528 Set Features Save Field: Not Supported 00:18:32.528 Reservations: Not Supported 00:18:32.528 Timestamp: Not Supported 00:18:32.528 Copy: Supported 00:18:32.528 Volatile Write Cache: Present 00:18:32.528 Atomic Write Unit (Normal): 1 00:18:32.528 Atomic Write Unit (PFail): 1 00:18:32.528 Atomic Compare & Write Unit: 1 00:18:32.528 Fused Compare & Write: Supported 00:18:32.528 Scatter-Gather List 00:18:32.528 SGL Command Set: Supported (Dword aligned) 00:18:32.528 SGL Keyed: Not Supported 00:18:32.528 SGL Bit Bucket Descriptor: Not Supported 00:18:32.528 SGL Metadata Pointer: Not Supported 00:18:32.528 Oversized SGL: Not Supported 00:18:32.528 SGL Metadata Address: Not Supported 00:18:32.528 SGL Offset: Not Supported 00:18:32.528 Transport SGL Data Block: Not Supported 00:18:32.528 Replay Protected Memory Block: Not Supported 00:18:32.528 00:18:32.528 Firmware Slot Information 00:18:32.528 ========================= 00:18:32.528 Active slot: 1 00:18:32.528 Slot 1 Firmware Revision: 25.01 00:18:32.528 00:18:32.528 00:18:32.528 Commands Supported and Effects 00:18:32.528 ============================== 00:18:32.528 Admin Commands 00:18:32.528 -------------- 00:18:32.528 Get Log Page (02h): Supported 00:18:32.528 Identify (06h): Supported 00:18:32.528 Abort (08h): Supported 00:18:32.528 Set Features (09h): Supported 00:18:32.528 Get Features (0Ah): Supported 00:18:32.528 Asynchronous Event Request (0Ch): Supported 00:18:32.528 Keep Alive (18h): Supported 00:18:32.528 I/O Commands 00:18:32.528 ------------ 00:18:32.528 Flush (00h): Supported LBA-Change 00:18:32.528 Write (01h): Supported LBA-Change 00:18:32.528 Read (02h): Supported 00:18:32.528 Compare (05h): Supported 00:18:32.528 Write Zeroes (08h): Supported LBA-Change 00:18:32.528 Dataset Management (09h): Supported LBA-Change 00:18:32.528 Copy (19h): Supported LBA-Change 00:18:32.528 00:18:32.528 Error Log 00:18:32.528 ========= 00:18:32.528 00:18:32.528 Arbitration 00:18:32.528 =========== 00:18:32.528 Arbitration Burst: 1 00:18:32.528 00:18:32.528 Power Management 00:18:32.528 ================ 00:18:32.528 Number of Power States: 1 00:18:32.528 Current Power State: Power State #0 00:18:32.528 Power State #0: 00:18:32.528 Max Power: 0.00 W 00:18:32.528 Non-Operational State: Operational 00:18:32.528 Entry Latency: Not Reported 00:18:32.528 Exit Latency: Not Reported 00:18:32.528 Relative Read Throughput: 0 00:18:32.528 Relative Read Latency: 0 00:18:32.528 Relative Write Throughput: 0 00:18:32.528 Relative Write Latency: 0 00:18:32.528 Idle Power: Not Reported 00:18:32.528 Active Power: Not Reported 00:18:32.528 Non-Operational Permissive Mode: Not Supported 00:18:32.528 00:18:32.528 Health Information 00:18:32.528 ================== 00:18:32.528 Critical Warnings: 00:18:32.528 Available Spare Space: OK 00:18:32.528 Temperature: OK 00:18:32.528 Device 
Reliability: OK 00:18:32.528 Read Only: No 00:18:32.528 Volatile Memory Backup: OK 00:18:32.528 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:32.528 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:32.528 Available Spare: 0% 00:18:32.528 [2024-10-01 01:36:12.321188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:32.528 [2024-10-01 01:36:12.329010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:32.528 [2024-10-01 01:36:12.329061] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:32.528 [2024-10-01 01:36:12.329079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.528 [2024-10-01 01:36:12.329091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.528 [2024-10-01 01:36:12.329101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.528 [2024-10-01 01:36:12.329110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.528 [2024-10-01 01:36:12.329197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:32.528 [2024-10-01 01:36:12.329218] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:32.528 [2024-10-01 01:36:12.330200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.528 [2024-10-01 01:36:12.330274] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:32.528 [2024-10-01 01:36:12.330296] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:32.528 [2024-10-01 01:36:12.331211] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:32.528 [2024-10-01 01:36:12.331235] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:32.528 [2024-10-01 01:36:12.331299] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:32.528 [2024-10-01 01:36:12.334013] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:32.528 Available Spare Threshold: 0% 00:18:32.528 Life Percentage Used: 0% 00:18:32.528 Data Units Read: 0 00:18:32.528 Data Units Written: 0 00:18:32.528 Host Read Commands: 0 00:18:32.528 Host Write Commands: 0 00:18:32.528 Controller Busy Time: 0 minutes 00:18:32.528 Power Cycles: 0 00:18:32.528 Power On Hours: 0 hours 00:18:32.528 Unsafe Shutdowns: 0 00:18:32.528 Unrecoverable Media Errors: 0 00:18:32.528 Lifetime Error Log Entries: 0 00:18:32.528 Warning Temperature Time: 0 minutes 00:18:32.528 Critical Temperature Time: 0 minutes 00:18:32.528 00:18:32.528 Number of Queues 00:18:32.528 ================ 00:18:32.528 Number of
I/O Submission Queues: 127 00:18:32.528 Number of I/O Completion Queues: 127 00:18:32.528 00:18:32.528 Active Namespaces 00:18:32.528 ================= 00:18:32.528 Namespace ID:1 00:18:32.528 Error Recovery Timeout: Unlimited 00:18:32.528 Command Set Identifier: NVM (00h) 00:18:32.528 Deallocate: Supported 00:18:32.528 Deallocated/Unwritten Error: Not Supported 00:18:32.528 Deallocated Read Value: Unknown 00:18:32.528 Deallocate in Write Zeroes: Not Supported 00:18:32.528 Deallocated Guard Field: 0xFFFF 00:18:32.528 Flush: Supported 00:18:32.528 Reservation: Supported 00:18:32.528 Namespace Sharing Capabilities: Multiple Controllers 00:18:32.528 Size (in LBAs): 131072 (0GiB) 00:18:32.528 Capacity (in LBAs): 131072 (0GiB) 00:18:32.528 Utilization (in LBAs): 131072 (0GiB) 00:18:32.528 NGUID: 06B67F0D147C4C328363FD2C5FFEA5FB 00:18:32.528 UUID: 06b67f0d-147c-4c32-8363-fd2c5ffea5fb 00:18:32.528 Thin Provisioning: Not Supported 00:18:32.528 Per-NS Atomic Units: Yes 00:18:32.529 Atomic Boundary Size (Normal): 0 00:18:32.529 Atomic Boundary Size (PFail): 0 00:18:32.529 Atomic Boundary Offset: 0 00:18:32.529 Maximum Single Source Range Length: 65535 00:18:32.529 Maximum Copy Length: 65535 00:18:32.529 Maximum Source Range Count: 1 00:18:32.529 NGUID/EUI64 Never Reused: No 00:18:32.529 Namespace Write Protected: No 00:18:32.529 Number of LBA Formats: 1 00:18:32.529 Current LBA Format: LBA Format #00 00:18:32.529 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:32.529 00:18:32.529 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:32.787 [2024-10-01 01:36:12.562146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:38.047 Initializing NVMe Controllers 00:18:38.047 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:38.047 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:38.047 Initialization complete. Launching workers. 
00:18:38.047 ======================================================== 00:18:38.047 Latency(us) 00:18:38.047 Device Information : IOPS MiB/s Average min max 00:18:38.047 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34039.60 132.97 3761.43 1164.72 8197.74 00:18:38.047 ======================================================== 00:18:38.047 Total : 34039.60 132.97 3761.43 1164.72 8197.74 00:18:38.047 00:18:38.047 [2024-10-01 01:36:17.674383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:38.047 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:38.305 [2024-10-01 01:36:17.921083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:43.569 Initializing NVMe Controllers 00:18:43.569 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:43.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:43.569 Initialization complete. Launching workers. 00:18:43.569 ======================================================== 00:18:43.569 Latency(us) 00:18:43.569 Device Information : IOPS MiB/s Average min max 00:18:43.569 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31287.76 122.22 4091.11 1214.46 7464.25 00:18:43.569 ======================================================== 00:18:43.569 Total : 31287.76 122.22 4091.11 1214.46 7464.25 00:18:43.569 00:18:43.569 [2024-10-01 01:36:22.943362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:43.569 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:43.569 [2024-10-01 01:36:23.165731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.831 [2024-10-01 01:36:28.301143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.831 Initializing NVMe Controllers 00:18:48.831 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.831 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:48.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:48.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:48.831 Initialization complete. Launching workers. 
00:18:48.831 Starting thread on core 2 00:18:48.831 Starting thread on core 3 00:18:48.831 Starting thread on core 1 00:18:48.831 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:48.831 [2024-10-01 01:36:28.607484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.114 [2024-10-01 01:36:31.667279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.114 Initializing NVMe Controllers 00:18:52.114 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.114 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:52.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:52.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:52.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:52.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:52.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:52.114 Initialization complete. Launching workers. 00:18:52.114 Starting thread on core 1 with urgent priority queue 00:18:52.114 Starting thread on core 2 with urgent priority queue 00:18:52.114 Starting thread on core 3 with urgent priority queue 00:18:52.114 Starting thread on core 0 with urgent priority queue 00:18:52.114 SPDK bdev Controller (SPDK2 ) core 0: 2689.00 IO/s 37.19 secs/100000 ios 00:18:52.114 SPDK bdev Controller (SPDK2 ) core 1: 3207.33 IO/s 31.18 secs/100000 ios 00:18:52.114 SPDK bdev Controller (SPDK2 ) core 2: 3093.00 IO/s 32.33 secs/100000 ios 00:18:52.114 SPDK bdev Controller (SPDK2 ) core 3: 3514.00 IO/s 28.46 secs/100000 ios 00:18:52.114 ======================================================== 00:18:52.114 00:18:52.114 01:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:52.372 [2024-10-01 01:36:31.968499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.372 Initializing NVMe Controllers 00:18:52.372 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.372 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.372 Namespace ID: 1 size: 0GB 00:18:52.372 Initialization complete. 00:18:52.372 INFO: using host memory buffer for IO 00:18:52.372 Hello world! 
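For reference, every benchmark and example binary in this stretch of the run is pointed at the same vfio-user endpoint through its -r transport ID string. A condensed sketch of the read benchmark traced at nvmf_vfio_user.sh@84 above (the $spdk shorthand is introduced here only for readability; the flags mirror the logged invocation):

  # spdk_nvme_perf against the cnode2 vfio-user controller: queue depth 128 (-q),
  # 4096-byte reads (-o, -w), 5-second run (-t), worker pinned by core mask 0x2 (-c).
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The @85 pass differs only in -w write; at this queue depth the two runs above report roughly 34.0k IOPS at 3.76 ms average latency for reads and 31.3k IOPS at 4.09 ms for writes.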
00:18:52.372 [2024-10-01 01:36:31.980572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.372 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:52.630 [2024-10-01 01:36:32.280852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.563 Initializing NVMe Controllers 00:18:53.563 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.563 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:53.563 Initialization complete. Launching workers. 00:18:53.563 submit (in ns) avg, min, max = 6018.1, 3515.6, 4016017.8 00:18:53.563 complete (in ns) avg, min, max = 24541.2, 2065.6, 4016365.6 00:18:53.563 00:18:53.563 Submit histogram 00:18:53.563 ================ 00:18:53.563 Range in us Cumulative Count 00:18:53.563 3.508 - 3.532: 0.3497% ( 47) 00:18:53.563 3.532 - 3.556: 1.3171% ( 130) 00:18:53.563 3.556 - 3.579: 4.1074% ( 375) 00:18:53.563 3.579 - 3.603: 9.1153% ( 673) 00:18:53.563 3.603 - 3.627: 18.0817% ( 1205) 00:18:53.563 3.627 - 3.650: 28.8861% ( 1452) 00:18:53.563 3.650 - 3.674: 37.6442% ( 1177) 00:18:53.563 3.674 - 3.698: 44.9661% ( 984) 00:18:53.563 3.698 - 3.721: 51.8565% ( 926) 00:18:53.563 3.721 - 3.745: 57.2885% ( 730) 00:18:53.563 3.745 - 3.769: 62.1028% ( 647) 00:18:53.563 3.769 - 3.793: 65.7415% ( 489) 00:18:53.563 3.793 - 3.816: 68.3012% ( 344) 00:18:53.563 3.816 - 3.840: 71.2479% ( 396) 00:18:53.563 3.840 - 3.864: 75.2363% ( 536) 00:18:53.563 3.864 - 3.887: 78.9791% ( 503) 00:18:53.563 3.887 - 3.911: 82.7666% ( 509) 00:18:53.563 3.911 - 3.935: 85.5718% ( 377) 00:18:53.563 3.935 - 3.959: 87.7000% ( 286) 00:18:53.563 3.959 - 3.982: 89.2626% ( 210) 00:18:53.563 3.982 - 4.006: 90.8327% ( 211) 00:18:53.563 4.006 - 4.030: 92.0827% ( 168) 00:18:53.563 4.030 - 4.053: 92.9831% ( 121) 00:18:53.563 4.053 - 4.077: 93.8016% ( 110) 00:18:53.563 4.077 - 4.101: 94.7838% ( 132) 00:18:53.563 4.101 - 4.124: 95.4535% ( 90) 00:18:53.563 4.124 - 4.148: 95.7958% ( 46) 00:18:53.563 4.148 - 4.172: 96.1976% ( 54) 00:18:53.563 4.172 - 4.196: 96.4953% ( 40) 00:18:53.563 4.196 - 4.219: 96.6441% ( 20) 00:18:53.564 4.219 - 4.243: 96.7855% ( 19) 00:18:53.564 4.243 - 4.267: 96.8673% ( 11) 00:18:53.564 4.267 - 4.290: 96.9938% ( 17) 00:18:53.564 4.290 - 4.314: 97.0980% ( 14) 00:18:53.564 4.314 - 4.338: 97.1724% ( 10) 00:18:53.564 4.338 - 4.361: 97.2245% ( 7) 00:18:53.564 4.361 - 4.385: 97.2766% ( 7) 00:18:53.564 4.385 - 4.409: 97.2989% ( 3) 00:18:53.564 4.409 - 4.433: 97.3361% ( 5) 00:18:53.564 4.433 - 4.456: 97.3436% ( 1) 00:18:53.564 4.456 - 4.480: 97.3733% ( 4) 00:18:53.564 4.480 - 4.504: 97.3956% ( 3) 00:18:53.564 4.504 - 4.527: 97.4180% ( 3) 00:18:53.564 4.527 - 4.551: 97.4254% ( 1) 00:18:53.564 4.551 - 4.575: 97.4403% ( 2) 00:18:53.564 4.575 - 4.599: 97.4700% ( 4) 00:18:53.564 4.622 - 4.646: 97.4849% ( 2) 00:18:53.564 4.646 - 4.670: 97.5073% ( 3) 00:18:53.564 4.670 - 4.693: 97.5445% ( 5) 00:18:53.564 4.693 - 4.717: 97.5593% ( 2) 00:18:53.564 4.717 - 4.741: 97.6189% ( 8) 00:18:53.564 4.741 - 4.764: 97.6412% ( 3) 00:18:53.564 4.764 - 4.788: 97.6784% ( 5) 00:18:53.564 4.788 - 4.812: 97.7305% ( 7) 00:18:53.564 4.812 - 4.836: 97.7751% ( 6) 00:18:53.564 4.836 - 4.859: 97.8123% ( 5) 00:18:53.564 4.859 - 4.883: 97.8198% ( 1) 00:18:53.564 4.883 - 
4.907: 97.8495% ( 4) 00:18:53.564 4.907 - 4.930: 97.8793% ( 4) 00:18:53.564 4.930 - 4.954: 97.9165% ( 5) 00:18:53.564 4.954 - 4.978: 97.9463% ( 4) 00:18:53.564 4.978 - 5.001: 97.9760% ( 4) 00:18:53.564 5.001 - 5.025: 98.0132% ( 5) 00:18:53.564 5.025 - 5.049: 98.0728% ( 8) 00:18:53.564 5.049 - 5.073: 98.0802% ( 1) 00:18:53.564 5.073 - 5.096: 98.0877% ( 1) 00:18:53.564 5.096 - 5.120: 98.1025% ( 2) 00:18:53.564 5.120 - 5.144: 98.1323% ( 4) 00:18:53.564 5.144 - 5.167: 98.1472% ( 2) 00:18:53.564 5.167 - 5.191: 98.1546% ( 1) 00:18:53.564 5.191 - 5.215: 98.1695% ( 2) 00:18:53.564 5.215 - 5.239: 98.1769% ( 1) 00:18:53.564 5.239 - 5.262: 98.2067% ( 4) 00:18:53.564 5.262 - 5.286: 98.2216% ( 2) 00:18:53.564 5.286 - 5.310: 98.2365% ( 2) 00:18:53.564 5.357 - 5.381: 98.2439% ( 1) 00:18:53.564 5.404 - 5.428: 98.2514% ( 1) 00:18:53.564 5.428 - 5.452: 98.2588% ( 1) 00:18:53.564 5.499 - 5.523: 98.2662% ( 1) 00:18:53.564 5.665 - 5.689: 98.2811% ( 2) 00:18:53.564 5.760 - 5.784: 98.3034% ( 3) 00:18:53.564 5.784 - 5.807: 98.3109% ( 1) 00:18:53.564 5.926 - 5.950: 98.3332% ( 3) 00:18:53.564 5.950 - 5.973: 98.3407% ( 1) 00:18:53.564 5.997 - 6.021: 98.3481% ( 1) 00:18:53.564 6.210 - 6.258: 98.3704% ( 3) 00:18:53.564 6.258 - 6.305: 98.3779% ( 1) 00:18:53.564 6.447 - 6.495: 98.3853% ( 1) 00:18:53.564 6.495 - 6.542: 98.3927% ( 1) 00:18:53.564 6.542 - 6.590: 98.4002% ( 1) 00:18:53.564 6.590 - 6.637: 98.4151% ( 2) 00:18:53.564 6.684 - 6.732: 98.4225% ( 1) 00:18:53.564 6.827 - 6.874: 98.4374% ( 2) 00:18:53.564 6.874 - 6.921: 98.4448% ( 1) 00:18:53.564 6.921 - 6.969: 98.4746% ( 4) 00:18:53.564 6.969 - 7.016: 98.4895% ( 2) 00:18:53.564 7.064 - 7.111: 98.4969% ( 1) 00:18:53.564 7.159 - 7.206: 98.5044% ( 1) 00:18:53.564 7.206 - 7.253: 98.5192% ( 2) 00:18:53.564 7.253 - 7.301: 98.5341% ( 2) 00:18:53.564 7.443 - 7.490: 98.5490% ( 2) 00:18:53.564 7.538 - 7.585: 98.5639% ( 2) 00:18:53.564 7.633 - 7.680: 98.5713% ( 1) 00:18:53.564 7.727 - 7.775: 98.5862% ( 2) 00:18:53.564 7.822 - 7.870: 98.6011% ( 2) 00:18:53.564 7.870 - 7.917: 98.6160% ( 2) 00:18:53.564 7.917 - 7.964: 98.6234% ( 1) 00:18:53.564 7.964 - 8.012: 98.6383% ( 2) 00:18:53.564 8.012 - 8.059: 98.6457% ( 1) 00:18:53.564 8.059 - 8.107: 98.6532% ( 1) 00:18:53.564 8.107 - 8.154: 98.6606% ( 1) 00:18:53.564 8.154 - 8.201: 98.6829% ( 3) 00:18:53.564 8.249 - 8.296: 98.6904% ( 1) 00:18:53.564 8.296 - 8.344: 98.6978% ( 1) 00:18:53.564 8.344 - 8.391: 98.7053% ( 1) 00:18:53.564 8.391 - 8.439: 98.7201% ( 2) 00:18:53.564 8.486 - 8.533: 98.7276% ( 1) 00:18:53.564 8.581 - 8.628: 98.7425% ( 2) 00:18:53.564 8.628 - 8.676: 98.7499% ( 1) 00:18:53.564 8.723 - 8.770: 98.7573% ( 1) 00:18:53.564 8.818 - 8.865: 98.7722% ( 2) 00:18:53.564 8.865 - 8.913: 98.7871% ( 2) 00:18:53.564 8.960 - 9.007: 98.7946% ( 1) 00:18:53.564 9.007 - 9.055: 98.8169% ( 3) 00:18:53.564 9.150 - 9.197: 98.8318% ( 2) 00:18:53.564 9.244 - 9.292: 98.8392% ( 1) 00:18:53.564 9.529 - 9.576: 98.8466% ( 1) 00:18:53.564 9.671 - 9.719: 98.8541% ( 1) 00:18:53.564 9.719 - 9.766: 98.8615% ( 1) 00:18:53.564 9.813 - 9.861: 98.8690% ( 1) 00:18:53.564 9.956 - 10.003: 98.8764% ( 1) 00:18:53.564 10.430 - 10.477: 98.8838% ( 1) 00:18:53.564 10.524 - 10.572: 98.8913% ( 1) 00:18:53.564 10.572 - 10.619: 98.9062% ( 2) 00:18:53.564 11.141 - 11.188: 98.9136% ( 1) 00:18:53.564 11.662 - 11.710: 98.9211% ( 1) 00:18:53.564 11.710 - 11.757: 98.9285% ( 1) 00:18:53.564 11.804 - 11.852: 98.9359% ( 1) 00:18:53.564 11.899 - 11.947: 98.9508% ( 2) 00:18:53.564 11.947 - 11.994: 98.9583% ( 1) 00:18:53.564 12.231 - 12.326: 98.9657% ( 1) 00:18:53.564 12.800 - 
12.895: 98.9731% ( 1) 00:18:53.564 12.895 - 12.990: 98.9806% ( 1) 00:18:53.564 13.274 - 13.369: 98.9880% ( 1) 00:18:53.564 13.369 - 13.464: 98.9955% ( 1) 00:18:53.564 13.464 - 13.559: 99.0029% ( 1) 00:18:53.564 13.843 - 13.938: 99.0103% ( 1) 00:18:53.564 14.033 - 14.127: 99.0252% ( 2) 00:18:53.564 14.127 - 14.222: 99.0327% ( 1) 00:18:53.564 14.222 - 14.317: 99.0475% ( 2) 00:18:53.564 14.507 - 14.601: 99.0624% ( 2) 00:18:53.564 14.601 - 14.696: 99.0699% ( 1) 00:18:53.564 14.696 - 14.791: 99.0922% ( 3) 00:18:53.564 14.981 - 15.076: 99.1071% ( 2) 00:18:53.564 15.076 - 15.170: 99.1145% ( 1) 00:18:53.564 15.265 - 15.360: 99.1220% ( 1) 00:18:53.564 17.161 - 17.256: 99.1294% ( 1) 00:18:53.564 17.351 - 17.446: 99.1443% ( 2) 00:18:53.564 17.446 - 17.541: 99.1666% ( 3) 00:18:53.564 17.541 - 17.636: 99.2038% ( 5) 00:18:53.564 17.636 - 17.730: 99.2336% ( 4) 00:18:53.564 17.730 - 17.825: 99.2708% ( 5) 00:18:53.564 17.825 - 17.920: 99.2857% ( 2) 00:18:53.564 17.920 - 18.015: 99.3303% ( 6) 00:18:53.564 18.015 - 18.110: 99.3601% ( 4) 00:18:53.564 18.110 - 18.204: 99.3898% ( 4) 00:18:53.564 18.204 - 18.299: 99.4568% ( 9) 00:18:53.564 18.299 - 18.394: 99.5089% ( 7) 00:18:53.564 18.394 - 18.489: 99.5610% ( 7) 00:18:53.564 18.489 - 18.584: 99.6726% ( 15) 00:18:53.564 18.584 - 18.679: 99.6949% ( 3) 00:18:53.564 18.679 - 18.773: 99.7396% ( 6) 00:18:53.564 18.773 - 18.868: 99.7619% ( 3) 00:18:53.564 18.868 - 18.963: 99.7768% ( 2) 00:18:53.564 18.963 - 19.058: 99.7991% ( 3) 00:18:53.564 19.058 - 19.153: 99.8214% ( 3) 00:18:53.564 19.153 - 19.247: 99.8363% ( 2) 00:18:53.564 19.247 - 19.342: 99.8512% ( 2) 00:18:53.564 19.342 - 19.437: 99.8735% ( 3) 00:18:53.564 19.532 - 19.627: 99.8809% ( 1) 00:18:53.564 19.627 - 19.721: 99.8884% ( 1) 00:18:53.564 20.480 - 20.575: 99.8958% ( 1) 00:18:53.564 20.575 - 20.670: 99.9033% ( 1) 00:18:53.564 21.902 - 21.997: 99.9107% ( 1) 00:18:53.564 22.092 - 22.187: 99.9181% ( 1) 00:18:53.564 23.135 - 23.230: 99.9256% ( 1) 00:18:53.564 30.341 - 30.530: 99.9330% ( 1) 00:18:53.564 30.910 - 31.099: 99.9405% ( 1) 00:18:53.564 31.858 - 32.047: 99.9479% ( 1) 00:18:53.564 3980.705 - 4004.978: 99.9926% ( 6) 00:18:53.564 4004.978 - 4029.250: 100.0000% ( 1) 00:18:53.564 00:18:53.564 Complete histogram 00:18:53.564 ================== 00:18:53.564 Range in us Cumulative Count 00:18:53.564 2.062 - 2.074: 1.6742% ( 225) 00:18:53.564 2.074 - 2.086: 34.9133% ( 4467) 00:18:53.564 2.086 - 2.098: 50.2493% ( 2061) 00:18:53.564 2.098 - 2.110: 52.5262% ( 306) 00:18:53.564 2.110 - 2.121: 59.3571% ( 918) 00:18:53.564 2.121 - 2.133: 62.3558% ( 403) 00:18:53.564 2.133 - 2.145: 66.4930% ( 556) 00:18:53.564 2.145 - 2.157: 78.0043% ( 1547) 00:18:53.564 2.157 - 2.169: 81.1444% ( 422) 00:18:53.564 2.169 - 2.181: 82.7219% ( 212) 00:18:53.564 2.181 - 2.193: 85.2519% ( 340) 00:18:53.564 2.193 - 2.204: 86.4424% ( 160) 00:18:53.564 2.204 - 2.216: 87.5660% ( 151) 00:18:53.564 2.216 - 2.228: 90.4457% ( 387) 00:18:53.564 2.228 - 2.240: 92.6631% ( 298) 00:18:53.564 2.240 - 2.252: 93.8537% ( 160) 00:18:53.564 2.252 - 2.264: 94.4936% ( 86) 00:18:53.564 2.264 - 2.276: 94.8657% ( 50) 00:18:53.564 2.276 - 2.287: 95.1187% ( 34) 00:18:53.564 2.287 - 2.299: 95.3717% ( 34) 00:18:53.564 2.299 - 2.311: 95.7586% ( 52) 00:18:53.564 2.311 - 2.323: 96.0265% ( 36) 00:18:53.564 2.323 - 2.335: 96.0786% ( 7) 00:18:53.564 2.335 - 2.347: 96.1232% ( 6) 00:18:53.564 2.347 - 2.359: 96.1604% ( 5) 00:18:53.565 2.359 - 2.370: 96.2944% ( 18) 00:18:53.565 2.370 - 2.382: 96.5176% ( 30) 00:18:53.565 2.382 - 2.394: 97.1129% ( 80) 00:18:53.565 2.394 - 2.406: 
97.4700% ( 48) 00:18:53.565 2.406 - 2.418: 97.7751% ( 41) 00:18:53.565 2.418 - 2.430: 97.9091% ( 18) 00:18:53.565 2.430 - 2.441: 98.0356% ( 17) 00:18:53.565 2.441 - 2.453: 98.1174% ( 11) 00:18:53.565 2.453 - 2.465: 98.1769% ( 8) 00:18:53.565 2.465 - 2.477: 98.2514% ( 10) 00:18:53.565 2.477 - 2.489: 98.2960% ( 6) 00:18:53.565 2.489 - 2.501: 98.3481% ( 7) 00:18:53.565 2.501 - 2.513: 98.3779% ( 4) 00:18:53.565 2.513 - 2.524: 98.3853% ( 1) 00:18:53.565 2.524 - 2.536: 98.4225% ( 5) 00:18:53.565 2.536 - 2.548: 98.4448% ( 3) 00:18:53.565 2.548 - 2.560: 98.4597% ( 2) 00:18:53.565 2.560 - 2.572: 98.4671% ( 1) 00:18:53.565 2.596 - 2.607: 98.4820% ( 2) 00:18:53.565 2.607 - 2.619: 98.4895% ( 1) 00:18:53.565 2.619 - 2.631: 98.4969% ( 1) 00:18:53.565 2.631 - 2.643: 98.5192% ( 3) 00:18:53.565 2.643 - 2.655: 98.5267% ( 1) 00:18:53.565 2.655 - 2.667: 98.5416% ( 2) 00:18:53.565 2.690 - 2.702: 98.5490% ( 1) 00:18:53.565 2.702 - 2.714: 98.5564% ( 1) 00:18:53.565 2.714 - 2.726: 98.5639% ( 1) 00:18:53.565 2.809 - 2.821: 98.5713% ( 1) 00:18:53.565 3.271 - 3.295: 98.5788% ( 1) 00:18:53.565 3.342 - 3.366: 98.5862% ( 1) 00:18:53.565 3.366 - 3.390: 98.5936% ( 1) 00:18:53.565 3.413 - 3.437: 98.6160% ( 3) 00:18:53.565 3.437 - 3.461: 98.6457% ( 4) 00:18:53.565 3.484 - 3.508: 98.6532% ( 1) 00:18:53.565 3.508 - 3.532: 98.6606% ( 1) 00:18:53.565 3.532 - 3.556: 9[2024-10-01 01:36:33.384803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.823 8.6681% ( 1) 00:18:53.823 3.556 - 3.579: 98.6755% ( 1) 00:18:53.823 3.579 - 3.603: 98.7127% ( 5) 00:18:53.823 3.603 - 3.627: 98.7276% ( 2) 00:18:53.823 3.674 - 3.698: 98.7350% ( 1) 00:18:53.823 3.769 - 3.793: 98.7499% ( 2) 00:18:53.823 3.793 - 3.816: 98.7722% ( 3) 00:18:53.823 3.840 - 3.864: 98.7797% ( 1) 00:18:53.823 4.409 - 4.433: 98.7871% ( 1) 00:18:53.823 4.575 - 4.599: 98.7946% ( 1) 00:18:53.823 5.049 - 5.073: 98.8020% ( 1) 00:18:53.823 5.096 - 5.120: 98.8094% ( 1) 00:18:53.823 5.215 - 5.239: 98.8169% ( 1) 00:18:53.823 5.262 - 5.286: 98.8243% ( 1) 00:18:53.823 5.381 - 5.404: 98.8318% ( 1) 00:18:53.823 5.997 - 6.021: 98.8392% ( 1) 00:18:53.823 6.044 - 6.068: 98.8466% ( 1) 00:18:53.823 6.163 - 6.210: 98.8541% ( 1) 00:18:53.823 6.210 - 6.258: 98.8615% ( 1) 00:18:53.823 6.258 - 6.305: 98.8690% ( 1) 00:18:53.823 6.495 - 6.542: 98.8764% ( 1) 00:18:53.823 6.637 - 6.684: 98.8913% ( 2) 00:18:53.823 6.874 - 6.921: 98.9062% ( 2) 00:18:53.823 7.206 - 7.253: 98.9136% ( 1) 00:18:53.823 7.443 - 7.490: 98.9211% ( 1) 00:18:53.823 7.775 - 7.822: 98.9285% ( 1) 00:18:53.823 8.913 - 8.960: 98.9359% ( 1) 00:18:53.823 9.529 - 9.576: 98.9434% ( 1) 00:18:53.823 11.188 - 11.236: 98.9508% ( 1) 00:18:53.823 12.895 - 12.990: 98.9583% ( 1) 00:18:53.823 13.084 - 13.179: 98.9657% ( 1) 00:18:53.823 15.644 - 15.739: 98.9955% ( 4) 00:18:53.823 15.739 - 15.834: 99.0327% ( 5) 00:18:53.823 15.929 - 16.024: 99.0475% ( 2) 00:18:53.823 16.024 - 16.119: 99.0699% ( 3) 00:18:53.823 16.119 - 16.213: 99.0773% ( 1) 00:18:53.823 16.213 - 16.308: 99.0996% ( 3) 00:18:53.823 16.308 - 16.403: 99.1220% ( 3) 00:18:53.823 16.403 - 16.498: 99.1592% ( 5) 00:18:53.823 16.498 - 16.593: 99.2187% ( 8) 00:18:53.823 16.593 - 16.687: 99.2708% ( 7) 00:18:53.823 16.687 - 16.782: 99.2857% ( 2) 00:18:53.823 16.782 - 16.877: 99.3005% ( 2) 00:18:53.823 16.877 - 16.972: 99.3154% ( 2) 00:18:53.823 17.067 - 17.161: 99.3377% ( 3) 00:18:53.823 17.256 - 17.351: 99.3452% ( 1) 00:18:53.823 17.446 - 17.541: 99.3526% ( 1) 00:18:53.823 17.541 - 17.636: 99.3601% ( 1) 00:18:53.823 17.636 - 17.730: 
99.3750% ( 2) 00:18:53.823 17.730 - 17.825: 99.3824% ( 1) 00:18:53.823 17.920 - 18.015: 99.3898% ( 1) 00:18:53.823 18.110 - 18.204: 99.3973% ( 1) 00:18:53.823 18.299 - 18.394: 99.4196% ( 3) 00:18:53.823 18.489 - 18.584: 99.4345% ( 2) 00:18:53.823 24.841 - 25.031: 99.4419% ( 1) 00:18:53.823 3980.705 - 4004.978: 99.8586% ( 56) 00:18:53.823 4004.978 - 4029.250: 100.0000% ( 19) 00:18:53.823 00:18:53.823 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:53.823 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:53.823 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:53.823 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:53.823 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:54.081 [ 00:18:54.081 { 00:18:54.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:54.081 "subtype": "Discovery", 00:18:54.081 "listen_addresses": [], 00:18:54.081 "allow_any_host": true, 00:18:54.081 "hosts": [] 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:54.081 "subtype": "NVMe", 00:18:54.081 "listen_addresses": [ 00:18:54.081 { 00:18:54.081 "trtype": "VFIOUSER", 00:18:54.081 "adrfam": "IPv4", 00:18:54.081 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:54.081 "trsvcid": "0" 00:18:54.081 } 00:18:54.081 ], 00:18:54.081 "allow_any_host": true, 00:18:54.081 "hosts": [], 00:18:54.081 "serial_number": "SPDK1", 00:18:54.081 "model_number": "SPDK bdev Controller", 00:18:54.081 "max_namespaces": 32, 00:18:54.081 "min_cntlid": 1, 00:18:54.081 "max_cntlid": 65519, 00:18:54.081 "namespaces": [ 00:18:54.081 { 00:18:54.081 "nsid": 1, 00:18:54.081 "bdev_name": "Malloc1", 00:18:54.081 "name": "Malloc1", 00:18:54.081 "nguid": "09A2C1F7ED234CCFAAF5091157070105", 00:18:54.081 "uuid": "09a2c1f7-ed23-4ccf-aaf5-091157070105" 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "nsid": 2, 00:18:54.081 "bdev_name": "Malloc3", 00:18:54.081 "name": "Malloc3", 00:18:54.081 "nguid": "FF5579D7FADB4404B557EDA3FE7864E8", 00:18:54.081 "uuid": "ff5579d7-fadb-4404-b557-eda3fe7864e8" 00:18:54.081 } 00:18:54.081 ] 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:54.081 "subtype": "NVMe", 00:18:54.081 "listen_addresses": [ 00:18:54.081 { 00:18:54.081 "trtype": "VFIOUSER", 00:18:54.081 "adrfam": "IPv4", 00:18:54.081 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:54.081 "trsvcid": "0" 00:18:54.081 } 00:18:54.081 ], 00:18:54.081 "allow_any_host": true, 00:18:54.081 "hosts": [], 00:18:54.081 "serial_number": "SPDK2", 00:18:54.081 "model_number": "SPDK bdev Controller", 00:18:54.081 "max_namespaces": 32, 00:18:54.081 "min_cntlid": 1, 00:18:54.081 "max_cntlid": 65519, 00:18:54.081 "namespaces": [ 00:18:54.081 { 00:18:54.081 "nsid": 1, 00:18:54.081 "bdev_name": "Malloc2", 00:18:54.081 "name": "Malloc2", 00:18:54.081 "nguid": "06B67F0D147C4C328363FD2C5FFEA5FB", 00:18:54.081 "uuid": "06b67f0d-147c-4c32-8363-fd2c5ffea5fb" 00:18:54.081 } 00:18:54.081 ] 00:18:54.081 } 00:18:54.081 ] 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- 
# AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=896573 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:54.081 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:54.081 [2024-10-01 01:36:33.912520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:54.339 Malloc4 00:18:54.339 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:54.596 [2024-10-01 01:36:34.311500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:54.596 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:54.596 Asynchronous Event Request test 00:18:54.596 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.596 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.596 Registering asynchronous event callbacks... 00:18:54.596 Starting namespace attribute notice tests for all controllers... 00:18:54.596 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:54.596 aer_cb - Changed Namespace 00:18:54.596 Cleaning up... 
00:18:54.854 [ 00:18:54.854 { 00:18:54.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:54.854 "subtype": "Discovery", 00:18:54.854 "listen_addresses": [], 00:18:54.854 "allow_any_host": true, 00:18:54.854 "hosts": [] 00:18:54.854 }, 00:18:54.854 { 00:18:54.854 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:54.854 "subtype": "NVMe", 00:18:54.854 "listen_addresses": [ 00:18:54.854 { 00:18:54.854 "trtype": "VFIOUSER", 00:18:54.854 "adrfam": "IPv4", 00:18:54.854 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:54.854 "trsvcid": "0" 00:18:54.854 } 00:18:54.854 ], 00:18:54.854 "allow_any_host": true, 00:18:54.854 "hosts": [], 00:18:54.854 "serial_number": "SPDK1", 00:18:54.854 "model_number": "SPDK bdev Controller", 00:18:54.854 "max_namespaces": 32, 00:18:54.854 "min_cntlid": 1, 00:18:54.854 "max_cntlid": 65519, 00:18:54.854 "namespaces": [ 00:18:54.854 { 00:18:54.854 "nsid": 1, 00:18:54.854 "bdev_name": "Malloc1", 00:18:54.854 "name": "Malloc1", 00:18:54.854 "nguid": "09A2C1F7ED234CCFAAF5091157070105", 00:18:54.854 "uuid": "09a2c1f7-ed23-4ccf-aaf5-091157070105" 00:18:54.854 }, 00:18:54.854 { 00:18:54.854 "nsid": 2, 00:18:54.854 "bdev_name": "Malloc3", 00:18:54.854 "name": "Malloc3", 00:18:54.854 "nguid": "FF5579D7FADB4404B557EDA3FE7864E8", 00:18:54.854 "uuid": "ff5579d7-fadb-4404-b557-eda3fe7864e8" 00:18:54.854 } 00:18:54.854 ] 00:18:54.854 }, 00:18:54.854 { 00:18:54.854 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:54.854 "subtype": "NVMe", 00:18:54.854 "listen_addresses": [ 00:18:54.854 { 00:18:54.854 "trtype": "VFIOUSER", 00:18:54.854 "adrfam": "IPv4", 00:18:54.854 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:54.854 "trsvcid": "0" 00:18:54.854 } 00:18:54.854 ], 00:18:54.854 "allow_any_host": true, 00:18:54.855 "hosts": [], 00:18:54.855 "serial_number": "SPDK2", 00:18:54.855 "model_number": "SPDK bdev Controller", 00:18:54.855 "max_namespaces": 32, 00:18:54.855 "min_cntlid": 1, 00:18:54.855 "max_cntlid": 65519, 00:18:54.855 "namespaces": [ 00:18:54.855 { 00:18:54.855 "nsid": 1, 00:18:54.855 "bdev_name": "Malloc2", 00:18:54.855 "name": "Malloc2", 00:18:54.855 "nguid": "06B67F0D147C4C328363FD2C5FFEA5FB", 00:18:54.855 "uuid": "06b67f0d-147c-4c32-8363-fd2c5ffea5fb" 00:18:54.855 }, 00:18:54.855 { 00:18:54.855 "nsid": 2, 00:18:54.855 "bdev_name": "Malloc4", 00:18:54.855 "name": "Malloc4", 00:18:54.855 "nguid": "FF5053C15E61462C8D6F7025627225F5", 00:18:54.855 "uuid": "ff5053c1-5e61-462c-8d6f-7025627225f5" 00:18:54.855 } 00:18:54.855 ] 00:18:54.855 } 00:18:54.855 ] 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 896573 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 890374 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 890374 ']' 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 890374 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 890374 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 890374' 00:18:54.855 killing process with pid 890374 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 890374 00:18:54.855 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 890374 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=896717 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 896717' 00:18:55.424 Process pid: 896717 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 896717 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 896717 ']' 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.424 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.425 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.425 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:55.425 [2024-10-01 01:36:35.045438] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:55.425 [2024-10-01 01:36:35.046518] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:18:55.425 [2024-10-01 01:36:35.046579] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.425 [2024-10-01 01:36:35.109059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.425 [2024-10-01 01:36:35.198510] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.425 [2024-10-01 01:36:35.198577] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.425 [2024-10-01 01:36:35.198593] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.425 [2024-10-01 01:36:35.198610] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.425 [2024-10-01 01:36:35.198621] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.425 [2024-10-01 01:36:35.198708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.425 [2024-10-01 01:36:35.198778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.425 [2024-10-01 01:36:35.198876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.425 [2024-10-01 01:36:35.198879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.683 [2024-10-01 01:36:35.303403] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:55.683 [2024-10-01 01:36:35.303684] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:55.683 [2024-10-01 01:36:35.303990] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:55.683 [2024-10-01 01:36:35.304637] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:55.683 [2024-10-01 01:36:35.304898] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
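With the target restarted under --interrupt-mode (the "to intr mode" notices above), the script rebuilds both vfio-user devices through the same RPC sequence used earlier in the run. A condensed sketch of that bring-up for the first device, mirroring the nvmf_vfio_user.sh trace that follows (the $spdk shorthand and the explicit backgrounding are illustrative; the test harness manages the target process itself):

  # Start an interrupt-mode nvmf target, create the vfio-user transport with the
  # -M -I transport args, then expose Malloc1 as cnode1 on a vfio-user socket directory.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  $spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device (Malloc2 under nqn.2019-07.io.spdk:cnode2) is created the same way; in interrupt mode the reactors and nvmf poll groups wait on events instead of busy-polling, which is what the spdk_thread intr-mode notices above are recording.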
00:18:55.683 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.683 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:55.683 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:56.619 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:56.877 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:56.877 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:56.877 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:56.877 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:56.877 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:57.445 Malloc1 00:18:57.446 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:57.703 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:57.961 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:58.219 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:58.219 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:58.219 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:58.476 Malloc2 00:18:58.476 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:58.733 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:58.991 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 896717 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 896717 ']' 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 896717 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896717 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896717' 00:18:59.249 killing process with pid 896717 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 896717 00:18:59.249 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 896717 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:59.507 00:18:59.507 real 0m53.303s 00:18:59.507 user 3m25.504s 00:18:59.507 sys 0m3.869s 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:59.507 ************************************ 00:18:59.507 END TEST nvmf_vfio_user 00:18:59.507 ************************************ 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.507 01:36:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.766 ************************************ 00:18:59.766 START TEST nvmf_vfio_user_nvme_compliance 00:18:59.766 ************************************ 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:59.766 * Looking for test storage... 
00:18:59.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.766 --rc genhtml_branch_coverage=1 00:18:59.766 --rc genhtml_function_coverage=1 00:18:59.766 --rc genhtml_legend=1 00:18:59.766 --rc geninfo_all_blocks=1 00:18:59.766 --rc geninfo_unexecuted_blocks=1 00:18:59.766 00:18:59.766 ' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.766 --rc genhtml_branch_coverage=1 00:18:59.766 --rc genhtml_function_coverage=1 00:18:59.766 --rc genhtml_legend=1 00:18:59.766 --rc geninfo_all_blocks=1 00:18:59.766 --rc geninfo_unexecuted_blocks=1 00:18:59.766 00:18:59.766 ' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.766 --rc genhtml_branch_coverage=1 00:18:59.766 --rc genhtml_function_coverage=1 00:18:59.766 --rc genhtml_legend=1 00:18:59.766 --rc geninfo_all_blocks=1 00:18:59.766 --rc geninfo_unexecuted_blocks=1 00:18:59.766 00:18:59.766 ' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.766 --rc genhtml_branch_coverage=1 00:18:59.766 --rc genhtml_function_coverage=1 00:18:59.766 --rc genhtml_legend=1 00:18:59.766 --rc geninfo_all_blocks=1 00:18:59.766 --rc 
geninfo_unexecuted_blocks=1 00:18:59.766 00:18:59.766 ' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.766 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=897330 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 897330' 00:18:59.767 Process pid: 897330 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 897330 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 897330 ']' 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.767 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:59.767 [2024-10-01 01:36:39.565180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:18:59.767 [2024-10-01 01:36:39.565260] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.026 [2024-10-01 01:36:39.624081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.026 [2024-10-01 01:36:39.715731] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.026 [2024-10-01 01:36:39.715789] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.026 [2024-10-01 01:36:39.715807] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.026 [2024-10-01 01:36:39.715821] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.026 [2024-10-01 01:36:39.715840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.026 [2024-10-01 01:36:39.715932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.026 [2024-10-01 01:36:39.716008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.026 [2024-10-01 01:36:39.716012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.026 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.026 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:00.026 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:01.401 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:01.401 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 malloc0 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:01.402 01:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.402 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:01.402 00:19:01.402 00:19:01.402 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.402 http://cunit.sourceforge.net/ 00:19:01.402 00:19:01.402 00:19:01.402 Suite: nvme_compliance 00:19:01.402 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 01:36:41.061550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.402 [2024-10-01 01:36:41.062994] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:01.402 [2024-10-01 01:36:41.063027] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:01.402 [2024-10-01 01:36:41.063041] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:01.402 [2024-10-01 01:36:41.064569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.402 passed 00:19:01.402 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 01:36:41.150158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.402 [2024-10-01 01:36:41.153178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.402 passed 00:19:01.402 Test: admin_identify_ns ...[2024-10-01 01:36:41.240342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.661 [2024-10-01 01:36:41.300017] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:01.661 [2024-10-01 01:36:41.308018] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:01.661 [2024-10-01 01:36:41.332149] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:01.661 passed 00:19:01.661 Test: admin_get_features_mandatory_features ...[2024-10-01 01:36:41.414071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.661 [2024-10-01 01:36:41.417095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.661 passed 00:19:01.661 Test: admin_get_features_optional_features ...[2024-10-01 01:36:41.500641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.662 [2024-10-01 01:36:41.503662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.922 passed 00:19:01.922 Test: admin_set_features_number_of_queues ...[2024-10-01 01:36:41.589569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.922 [2024-10-01 01:36:41.697099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.922 passed 00:19:02.181 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 01:36:41.776711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.181 [2024-10-01 01:36:41.782743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.181 passed 00:19:02.181 Test: admin_get_log_page_with_lpo ...[2024-10-01 01:36:41.865946] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.181 [2024-10-01 01:36:41.938014] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:02.181 [2024-10-01 01:36:41.951098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.181 passed 00:19:02.181 Test: fabric_property_get ...[2024-10-01 01:36:42.033791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.181 [2024-10-01 01:36:42.035121] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:02.440 [2024-10-01 01:36:42.036809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.440 passed 00:19:02.440 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 01:36:42.121400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.440 [2024-10-01 01:36:42.122693] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:02.440 [2024-10-01 01:36:42.124418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.440 passed 00:19:02.440 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 01:36:42.211743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.698 [2024-10-01 01:36:42.295008] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.698 [2024-10-01 01:36:42.310039] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.698 [2024-10-01 01:36:42.315156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.698 passed 00:19:02.698 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 01:36:42.400182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.698 [2024-10-01 01:36:42.401507] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:02.698 [2024-10-01 01:36:42.403204] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.698 passed 00:19:02.698 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 01:36:42.488674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.958 [2024-10-01 01:36:42.564009] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:02.958 [2024-10-01 01:36:42.588011] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:02.958 [2024-10-01 01:36:42.593128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.958 passed 00:19:02.958 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 01:36:42.679744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.958 [2024-10-01 01:36:42.681076] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:02.958 [2024-10-01 01:36:42.681133] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:02.958 [2024-10-01 01:36:42.682769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.958 passed 00:19:02.958 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 01:36:42.771140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.218 [2024-10-01 01:36:42.867006] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:03.218 [2024-10-01 01:36:42.875005] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:03.218 [2024-10-01 01:36:42.883009] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:03.218 [2024-10-01 01:36:42.891021] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:03.218 [2024-10-01 01:36:42.920112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.218 passed 00:19:03.218 Test: admin_create_io_sq_verify_pc ...[2024-10-01 01:36:43.004753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.218 [2024-10-01 01:36:43.021035] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:03.218 [2024-10-01 01:36:43.038131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.218 passed 00:19:03.478 Test: admin_create_io_qp_max_qps ...[2024-10-01 01:36:43.121713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.412 [2024-10-01 01:36:44.222013] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:04.983 [2024-10-01 01:36:44.614957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.983 passed 00:19:04.983 Test: admin_create_io_sq_shared_cq ...[2024-10-01 01:36:44.702325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.983 [2024-10-01 01:36:44.834020] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:05.261 [2024-10-01 01:36:44.871107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.261 passed 00:19:05.261 00:19:05.261 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.261 suites 1 1 n/a 0 0 00:19:05.261 tests 18 18 18 0 0 00:19:05.261 asserts 360 
360 360 0 n/a 00:19:05.261 00:19:05.261 Elapsed time = 1.580 seconds 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 897330 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 897330 ']' 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 897330 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 897330 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 897330' 00:19:05.261 killing process with pid 897330 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 897330 00:19:05.261 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 897330 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:05.524 00:19:05.524 real 0m5.870s 00:19:05.524 user 0m16.347s 00:19:05.524 sys 0m0.557s 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:05.524 ************************************ 00:19:05.524 END TEST nvmf_vfio_user_nvme_compliance 00:19:05.524 ************************************ 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.524 ************************************ 00:19:05.524 START TEST nvmf_vfio_user_fuzz 00:19:05.524 ************************************ 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:05.524 * Looking for test storage... 
00:19:05.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:05.524 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.782 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:05.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.783 --rc genhtml_branch_coverage=1 00:19:05.783 --rc genhtml_function_coverage=1 00:19:05.783 --rc genhtml_legend=1 00:19:05.783 --rc geninfo_all_blocks=1 00:19:05.783 --rc geninfo_unexecuted_blocks=1 00:19:05.783 00:19:05.783 ' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:05.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.783 --rc genhtml_branch_coverage=1 00:19:05.783 --rc genhtml_function_coverage=1 00:19:05.783 --rc genhtml_legend=1 00:19:05.783 --rc geninfo_all_blocks=1 00:19:05.783 --rc geninfo_unexecuted_blocks=1 00:19:05.783 00:19:05.783 ' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:05.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.783 --rc genhtml_branch_coverage=1 00:19:05.783 --rc genhtml_function_coverage=1 00:19:05.783 --rc genhtml_legend=1 00:19:05.783 --rc geninfo_all_blocks=1 00:19:05.783 --rc geninfo_unexecuted_blocks=1 00:19:05.783 00:19:05.783 ' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:05.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.783 --rc genhtml_branch_coverage=1 00:19:05.783 --rc genhtml_function_coverage=1 00:19:05.783 --rc genhtml_legend=1 00:19:05.783 --rc geninfo_all_blocks=1 00:19:05.783 --rc geninfo_unexecuted_blocks=1 00:19:05.783 00:19:05.783 ' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:05.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=898066 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 898066' 00:19:05.783 Process pid: 898066 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 898066 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 898066 ']' 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
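Stripped of the xtrace prefixes, this stage amounts to starting nvmf_tgt on a single core and then waiting for its RPC socket before any configuration is attempted. A minimal standalone sketch, using the binary path and flags from this workspace (the polling loop below is an illustrative stand-in for the harness's waitforlisten helper, not its actual code):

    # Launch the fuzz target: shm id 0, all tracepoint groups enabled, core mask 0x1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Block until the app is listening on its default RPC socket (assumed /var/tmp/spdk.sock)
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done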
00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.783 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.043 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.043 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:06.043 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.983 malloc0 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.983 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
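The rpc_cmd calls traced above are what actually wire up the endpoint under test: a VFIOUSER transport, a 64 MiB/512-byte-block malloc bdev, a subsystem with that bdev as its namespace, and a vfio-user listener rooted at /var/run/vfio-user. Since rpc_cmd is the harness's wrapper around the SPDK RPC client, an equivalent standalone sequence (a sketch assuming scripts/rpc.py and the default RPC socket) would look like:

    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER                # register the vfio-user transport
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0             # backing RAM disk: 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0                        # the socket directory acts as the address

The trid string assembled on the last line above is then handed to nvme_fuzz, which drives randomized admin and I/O commands at this subsystem and prints the opcode and command counters seen further down in the log.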
00:19:07.242 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:39.307 Fuzzing completed. Shutting down the fuzz application 00:19:39.307 00:19:39.307 Dumping successful admin opcodes: 00:19:39.307 8, 9, 10, 24, 00:19:39.307 Dumping successful io opcodes: 00:19:39.307 0, 00:19:39.307 NS: 0x200003a1ef00 I/O qp, Total commands completed: 597497, total successful commands: 2310, random_seed: 251147904 00:19:39.307 NS: 0x200003a1ef00 admin qp, Total commands completed: 76252, total successful commands: 593, random_seed: 3313805440 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 898066 ']' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898066' 00:19:39.307 killing process with pid 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 898066 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:39.307 00:19:39.307 real 0m32.306s 00:19:39.307 user 0m31.691s 00:19:39.307 sys 0m29.070s 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:39.307 ************************************ 
00:19:39.307 END TEST nvmf_vfio_user_fuzz 00:19:39.307 ************************************ 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.307 ************************************ 00:19:39.307 START TEST nvmf_auth_target 00:19:39.307 ************************************ 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.307 * Looking for test storage... 00:19:39.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.307 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.308 --rc genhtml_branch_coverage=1 00:19:39.308 --rc genhtml_function_coverage=1 00:19:39.308 --rc genhtml_legend=1 00:19:39.308 --rc geninfo_all_blocks=1 00:19:39.308 --rc geninfo_unexecuted_blocks=1 00:19:39.308 00:19:39.308 ' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.308 --rc genhtml_branch_coverage=1 00:19:39.308 --rc genhtml_function_coverage=1 00:19:39.308 --rc genhtml_legend=1 00:19:39.308 --rc geninfo_all_blocks=1 00:19:39.308 --rc geninfo_unexecuted_blocks=1 00:19:39.308 00:19:39.308 ' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.308 --rc genhtml_branch_coverage=1 00:19:39.308 --rc genhtml_function_coverage=1 00:19:39.308 --rc genhtml_legend=1 00:19:39.308 --rc geninfo_all_blocks=1 00:19:39.308 --rc geninfo_unexecuted_blocks=1 00:19:39.308 00:19:39.308 ' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:39.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.308 --rc genhtml_branch_coverage=1 00:19:39.308 --rc genhtml_function_coverage=1 00:19:39.308 --rc genhtml_legend=1 00:19:39.308 --rc geninfo_all_blocks=1 00:19:39.308 --rc geninfo_unexecuted_blocks=1 00:19:39.308 00:19:39.308 ' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.308 01:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.308 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:39.874 
01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.874 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:39.875 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:39.875 01:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:39.875 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:39.875 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:39.875 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
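The discovery pass above resolves each supported NIC PCI function to its kernel network interface by globbing that function's sysfs tree (/sys/bus/pci/devices/$pci/net/), which is how the two 0x159b (ice-driven) ports end up as cvl_0_0 and cvl_0_1. A minimal standalone sketch of the same lookup, using one of the PCI addresses from this run:

# list the net device(s) bound to a PCI function, as gather_supported_nvmf_pci_devs does
pci=0000:0a:00.0
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue              # skip if nothing is bound to this function
    echo "Found net device under $pci: ${netdir##*/}"
done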
00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.875 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.132 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.132 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.132 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.132 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.132 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:19:40.133 00:19:40.133 --- 10.0.0.2 ping statistics --- 00:19:40.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.133 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:19:40.133 00:19:40.133 --- 10.0.0.1 ping statistics --- 00:19:40.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.133 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=903505 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 903505 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 903505 ']' 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
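The nvmf_tcp_init sequence above gives the test an isolated point-to-point TCP path on real hardware: one port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are ping-verified. Condensed to the commands recorded in the log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator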
00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.133 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=903529 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:40.391 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7ba5cbf93bb77351081dd100bbd30618c9294f88a13a618a 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.2J1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7ba5cbf93bb77351081dd100bbd30618c9294f88a13a618a 0 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7ba5cbf93bb77351081dd100bbd30618c9294f88a13a618a 0 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7ba5cbf93bb77351081dd100bbd30618c9294f88a13a618a 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
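Two SPDK applications are then started and kept apart by RPC socket: the NVMe-oF target runs inside the namespace and answers on the default /var/tmp/spdk.sock (driven via rpc_cmd), while a second app on /var/tmp/host.sock (driven via hostrpc) plays the host/initiator role through bdev_nvme. A condensed sketch of the two launches as they appear above, with the PIDs presumably captured via $! for the later waitforlisten calls:

# target, inside the namespace, with nvmf_auth debug logging enabled
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

# host-side app with its own RPC socket and nvme_auth debug logging
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!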
00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.2J1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.2J1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.2J1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=8537ee7ddbbd1cb4d5ca12d062d2367cff2f02d44a8e82bda5c06e3b7ca24cac 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.o0d 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 8537ee7ddbbd1cb4d5ca12d062d2367cff2f02d44a8e82bda5c06e3b7ca24cac 3 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 8537ee7ddbbd1cb4d5ca12d062d2367cff2f02d44a8e82bda5c06e3b7ca24cac 3 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=8537ee7ddbbd1cb4d5ca12d062d2367cff2f02d44a8e82bda5c06e3b7ca24cac 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.o0d 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.o0d 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.o0d 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=391568b3381526f9048279108c4bf372 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.tcH 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 391568b3381526f9048279108c4bf372 1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 391568b3381526f9048279108c4bf372 1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=391568b3381526f9048279108c4bf372 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.tcH 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.tcH 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tcH 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.650 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b5325ffd905b814c05b58d18ecc9e84795155647828a8a47 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.cgh 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b5325ffd905b814c05b58d18ecc9e84795155647828a8a47 2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b5325ffd905b814c05b58d18ecc9e84795155647828a8a47 2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.651 01:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b5325ffd905b814c05b58d18ecc9e84795155647828a8a47 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.cgh 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.cgh 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cgh 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7d1c59a5abbe7ed0b45e30a4afa7fae483f028e8274c9bcf 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.iFZ 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7d1c59a5abbe7ed0b45e30a4afa7fae483f028e8274c9bcf 2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7d1c59a5abbe7ed0b45e30a4afa7fae483f028e8274c9bcf 2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7d1c59a5abbe7ed0b45e30a4afa7fae483f028e8274c9bcf 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.iFZ 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.iFZ 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.iFZ 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=3a3290f2093afe035776ca8bd05f7334 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.kjD 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 3a3290f2093afe035776ca8bd05f7334 1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 3a3290f2093afe035776ca8bd05f7334 1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=3a3290f2093afe035776ca8bd05f7334 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:19:40.651 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.kjD 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.kjD 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kjD 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:19:40.909 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a1f60404aea6bd0b22651b43643dcb739bb5ed64f892894cdabc57da281d03a9 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.nqD 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key a1f60404aea6bd0b22651b43643dcb739bb5ed64f892894cdabc57da281d03a9 3 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a1f60404aea6bd0b22651b43643dcb739bb5ed64f892894cdabc57da281d03a9 3 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a1f60404aea6bd0b22651b43643dcb739bb5ed64f892894cdabc57da281d03a9 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.nqD 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.nqD 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.nqD 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 903505 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 903505 ']' 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.910 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.167 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.167 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:41.167 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 903529 /var/tmp/host.sock 00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 903529 ']' 00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:41.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
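Each secret generated above by gen_dhchap_key comes out as a string of the form DHHC-1:<digest-id>:<base64>:, with digest ids 00/01/02/03 for null/sha256/sha384/sha512; the raw key material is 48 or 64 hex characters read from /dev/urandom via xxd, and the final encoding is done by an inline Python helper (the bare `python -` steps) whose body the log does not show. The following is a hypothetical stand-in for that helper, assuming the secret is the ASCII key with a little-endian CRC-32 appended before base64 encoding, which is the DHCHAP secret layout used by nvme-cli and SPDK; the DHHC-1:00: value passed to nvme connect further down does decode back to the keys[0] hex string plus four trailing bytes, consistent with that assumption.

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as for "gen_dhchap_key null 48"
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
# hypothetical equivalent of the format_dhchap_key / format_key helper, not the script's actual code
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # CRC-32 of the secret, little-endian
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF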
00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.168 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2J1 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2J1 00:19:41.425 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2J1 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.o0d ]] 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0d 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0d 00:19:41.683 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0d 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tcH 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.941 01:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tcH 00:19:41.941 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tcH 00:19:42.199 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cgh ]] 00:19:42.199 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cgh 00:19:42.199 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.199 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.199 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.199 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cgh 00:19:42.199 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cgh 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iFZ 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iFZ 00:19:42.457 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iFZ 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kjD ]] 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kjD 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kjD 00:19:42.714 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kjD 00:19:42.973 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.973 01:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nqD 00:19:42.973 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.973 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.232 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.232 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.nqD 00:19:43.232 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.nqD 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.492 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.751 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.751 
01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.009 00:19:44.009 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.009 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.009 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.267 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.267 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.267 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.267 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.267 { 00:19:44.267 "cntlid": 1, 00:19:44.267 "qid": 0, 00:19:44.267 "state": "enabled", 00:19:44.267 "thread": "nvmf_tgt_poll_group_000", 00:19:44.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.267 "listen_address": { 00:19:44.267 "trtype": "TCP", 00:19:44.267 "adrfam": "IPv4", 00:19:44.267 "traddr": "10.0.0.2", 00:19:44.267 "trsvcid": "4420" 00:19:44.267 }, 00:19:44.267 "peer_address": { 00:19:44.267 "trtype": "TCP", 00:19:44.267 "adrfam": "IPv4", 00:19:44.267 "traddr": "10.0.0.1", 00:19:44.267 "trsvcid": "39806" 00:19:44.267 }, 00:19:44.267 "auth": { 00:19:44.267 "state": "completed", 00:19:44.267 "digest": "sha256", 00:19:44.267 "dhgroup": "null" 00:19:44.267 } 00:19:44.267 } 00:19:44.267 ]' 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.267 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.835 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:19:44.835 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.771 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.029 01:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.029 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.288 00:19:46.288 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.288 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.288 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.546 { 00:19:46.546 "cntlid": 3, 00:19:46.546 "qid": 0, 00:19:46.546 "state": "enabled", 00:19:46.546 "thread": "nvmf_tgt_poll_group_000", 00:19:46.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.546 "listen_address": { 00:19:46.546 "trtype": "TCP", 00:19:46.546 "adrfam": "IPv4", 00:19:46.546 "traddr": "10.0.0.2", 00:19:46.546 "trsvcid": "4420" 00:19:46.546 }, 00:19:46.546 "peer_address": { 00:19:46.546 "trtype": "TCP", 00:19:46.546 "adrfam": "IPv4", 00:19:46.546 "traddr": "10.0.0.1", 00:19:46.546 "trsvcid": "39832" 00:19:46.546 }, 00:19:46.546 "auth": { 00:19:46.546 "state": "completed", 00:19:46.546 "digest": "sha256", 00:19:46.546 "dhgroup": "null" 00:19:46.546 } 00:19:46.546 } 00:19:46.546 ]' 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.546 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.112 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:19:47.112 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.049 01:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.049 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.616 00:19:48.616 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.616 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.616 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.874 { 00:19:48.874 "cntlid": 5, 00:19:48.874 "qid": 0, 00:19:48.874 "state": "enabled", 00:19:48.874 "thread": "nvmf_tgt_poll_group_000", 00:19:48.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.874 "listen_address": { 00:19:48.874 "trtype": "TCP", 00:19:48.874 "adrfam": "IPv4", 00:19:48.874 "traddr": "10.0.0.2", 00:19:48.874 "trsvcid": "4420" 00:19:48.874 }, 00:19:48.874 "peer_address": { 00:19:48.874 "trtype": "TCP", 00:19:48.874 "adrfam": "IPv4", 00:19:48.874 "traddr": "10.0.0.1", 00:19:48.874 "trsvcid": "39856" 00:19:48.874 }, 00:19:48.874 "auth": { 00:19:48.874 "state": "completed", 00:19:48.874 "digest": "sha256", 00:19:48.874 "dhgroup": "null" 00:19:48.874 } 00:19:48.874 } 00:19:48.874 ]' 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.874 01:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.874 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.133 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:19:49.133 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.070 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.329 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.589 00:19:50.849 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.849 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.849 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.107 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.107 { 00:19:51.107 "cntlid": 7, 00:19:51.107 "qid": 0, 00:19:51.107 "state": "enabled", 00:19:51.107 "thread": "nvmf_tgt_poll_group_000", 00:19:51.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.107 "listen_address": { 00:19:51.107 "trtype": "TCP", 00:19:51.107 "adrfam": "IPv4", 00:19:51.107 "traddr": "10.0.0.2", 00:19:51.107 "trsvcid": "4420" 00:19:51.107 }, 00:19:51.107 "peer_address": { 00:19:51.107 "trtype": "TCP", 00:19:51.107 "adrfam": "IPv4", 00:19:51.107 "traddr": "10.0.0.1", 00:19:51.107 "trsvcid": "39880" 00:19:51.107 }, 00:19:51.107 "auth": { 00:19:51.107 "state": "completed", 00:19:51.107 "digest": "sha256", 00:19:51.108 "dhgroup": "null" 00:19:51.108 } 00:19:51.108 } 00:19:51.108 ]' 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.108 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.366 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:19:51.366 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.298 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.556 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.125 00:19:53.125 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.125 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.125 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.383 { 00:19:53.383 "cntlid": 9, 00:19:53.383 "qid": 0, 00:19:53.383 "state": "enabled", 00:19:53.383 "thread": "nvmf_tgt_poll_group_000", 00:19:53.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.383 "listen_address": { 00:19:53.383 "trtype": "TCP", 00:19:53.383 "adrfam": "IPv4", 00:19:53.383 "traddr": "10.0.0.2", 00:19:53.383 "trsvcid": "4420" 00:19:53.383 }, 00:19:53.383 "peer_address": { 00:19:53.383 "trtype": "TCP", 00:19:53.383 "adrfam": "IPv4", 00:19:53.383 "traddr": "10.0.0.1", 00:19:53.383 "trsvcid": "34354" 00:19:53.383 }, 00:19:53.383 "auth": { 00:19:53.383 "state": "completed", 00:19:53.383 "digest": "sha256", 00:19:53.383 "dhgroup": "ffdhe2048" 00:19:53.383 } 00:19:53.383 } 00:19:53.383 ]' 00:19:53.383 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.383 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.643 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:19:53.643 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.582 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.840 01:37:34 
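The nvme_connect helper above drives the same key pair through the kernel initiator as well; a sketch of the underlying nvme-cli call, with every flag and both secrets copied verbatim from the log line above (this assumes a host with nvme-tcp support and root privileges, as on the CI node).

  # Kernel-host pass of the same authentication check, then tear it down again.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0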
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.840 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.407 00:19:55.407 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.407 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.407 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.666 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.666 { 00:19:55.666 "cntlid": 11, 00:19:55.666 "qid": 0, 00:19:55.667 "state": "enabled", 00:19:55.667 "thread": "nvmf_tgt_poll_group_000", 00:19:55.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.667 "listen_address": { 00:19:55.667 "trtype": "TCP", 00:19:55.667 "adrfam": "IPv4", 00:19:55.667 "traddr": "10.0.0.2", 00:19:55.667 "trsvcid": "4420" 00:19:55.667 }, 00:19:55.667 "peer_address": { 00:19:55.667 "trtype": "TCP", 00:19:55.667 "adrfam": "IPv4", 00:19:55.667 "traddr": "10.0.0.1", 00:19:55.667 "trsvcid": "34392" 00:19:55.667 }, 00:19:55.667 "auth": { 00:19:55.667 "state": "completed", 00:19:55.667 "digest": "sha256", 00:19:55.667 "dhgroup": "ffdhe2048" 00:19:55.667 } 00:19:55.667 } 00:19:55.667 ]' 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.667 01:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.667 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.926 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:19:55.926 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:19:56.860 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.860 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.860 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.860 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.861 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.861 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.861 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.861 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.118 01:37:36 
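Each iteration ends with the teardown visible in the trace above, so the next key pair starts from a clean state; a sketch, reusing the $rpc, $subnqn and $hostnqn names assumed in the earlier sketch.

  # End-of-iteration cleanup.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # drop the SPDK initiator's controller
  nvme disconnect -n "$subnqn"                                     # drop the kernel initiator, if it is still connected
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"           # un-register the host NQN on the target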
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.118 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.688 00:19:57.688 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.688 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.688 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.947 { 00:19:57.947 "cntlid": 13, 00:19:57.947 "qid": 0, 00:19:57.947 "state": "enabled", 00:19:57.947 "thread": "nvmf_tgt_poll_group_000", 00:19:57.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.947 "listen_address": { 00:19:57.947 "trtype": "TCP", 00:19:57.947 "adrfam": "IPv4", 00:19:57.947 "traddr": "10.0.0.2", 00:19:57.947 "trsvcid": "4420" 00:19:57.947 }, 00:19:57.947 "peer_address": { 00:19:57.947 "trtype": "TCP", 00:19:57.947 "adrfam": "IPv4", 00:19:57.947 "traddr": "10.0.0.1", 00:19:57.947 "trsvcid": "34426" 00:19:57.947 }, 00:19:57.947 "auth": { 00:19:57.947 "state": "completed", 00:19:57.947 "digest": 
"sha256", 00:19:57.947 "dhgroup": "ffdhe2048" 00:19:57.947 } 00:19:57.947 } 00:19:57.947 ]' 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.947 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.206 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:19:58.206 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.144 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.402 01:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.402 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.660 00:19:59.660 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.660 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.660 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.228 { 00:20:00.228 "cntlid": 15, 00:20:00.228 "qid": 0, 00:20:00.228 "state": "enabled", 00:20:00.228 "thread": "nvmf_tgt_poll_group_000", 00:20:00.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.228 "listen_address": { 00:20:00.228 "trtype": "TCP", 00:20:00.228 "adrfam": "IPv4", 00:20:00.228 "traddr": "10.0.0.2", 00:20:00.228 "trsvcid": "4420" 00:20:00.228 }, 00:20:00.228 "peer_address": { 00:20:00.228 "trtype": "TCP", 00:20:00.228 "adrfam": "IPv4", 00:20:00.228 "traddr": "10.0.0.1", 00:20:00.228 
"trsvcid": "34462" 00:20:00.228 }, 00:20:00.228 "auth": { 00:20:00.228 "state": "completed", 00:20:00.228 "digest": "sha256", 00:20:00.228 "dhgroup": "ffdhe2048" 00:20:00.228 } 00:20:00.228 } 00:20:00.228 ]' 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.228 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.485 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:00.485 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.529 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:01.788 01:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.788 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.046 00:20:02.046 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.046 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.046 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.305 { 00:20:02.305 "cntlid": 17, 00:20:02.305 "qid": 0, 00:20:02.305 "state": "enabled", 00:20:02.305 "thread": "nvmf_tgt_poll_group_000", 00:20:02.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.305 "listen_address": { 00:20:02.305 "trtype": "TCP", 00:20:02.305 "adrfam": "IPv4", 
00:20:02.305 "traddr": "10.0.0.2", 00:20:02.305 "trsvcid": "4420" 00:20:02.305 }, 00:20:02.305 "peer_address": { 00:20:02.305 "trtype": "TCP", 00:20:02.305 "adrfam": "IPv4", 00:20:02.305 "traddr": "10.0.0.1", 00:20:02.305 "trsvcid": "53768" 00:20:02.305 }, 00:20:02.305 "auth": { 00:20:02.305 "state": "completed", 00:20:02.305 "digest": "sha256", 00:20:02.305 "dhgroup": "ffdhe3072" 00:20:02.305 } 00:20:02.305 } 00:20:02.305 ]' 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.305 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.563 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.563 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.563 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.563 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.563 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.821 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:02.821 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.759 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.017 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.275 00:20:04.275 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.275 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.275 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.840 { 
00:20:04.840 "cntlid": 19, 00:20:04.840 "qid": 0, 00:20:04.840 "state": "enabled", 00:20:04.840 "thread": "nvmf_tgt_poll_group_000", 00:20:04.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.840 "listen_address": { 00:20:04.840 "trtype": "TCP", 00:20:04.840 "adrfam": "IPv4", 00:20:04.840 "traddr": "10.0.0.2", 00:20:04.840 "trsvcid": "4420" 00:20:04.840 }, 00:20:04.840 "peer_address": { 00:20:04.840 "trtype": "TCP", 00:20:04.840 "adrfam": "IPv4", 00:20:04.840 "traddr": "10.0.0.1", 00:20:04.840 "trsvcid": "53798" 00:20:04.840 }, 00:20:04.840 "auth": { 00:20:04.840 "state": "completed", 00:20:04.840 "digest": "sha256", 00:20:04.840 "dhgroup": "ffdhe3072" 00:20:04.840 } 00:20:04.840 } 00:20:04.840 ]' 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.840 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.097 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:05.098 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.032 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.290 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.864 00:20:06.864 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.864 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.864 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.122 01:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.122 { 00:20:07.122 "cntlid": 21, 00:20:07.122 "qid": 0, 00:20:07.122 "state": "enabled", 00:20:07.122 "thread": "nvmf_tgt_poll_group_000", 00:20:07.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.122 "listen_address": { 00:20:07.122 "trtype": "TCP", 00:20:07.122 "adrfam": "IPv4", 00:20:07.122 "traddr": "10.0.0.2", 00:20:07.122 "trsvcid": "4420" 00:20:07.122 }, 00:20:07.122 "peer_address": { 00:20:07.122 "trtype": "TCP", 00:20:07.122 "adrfam": "IPv4", 00:20:07.122 "traddr": "10.0.0.1", 00:20:07.122 "trsvcid": "53814" 00:20:07.122 }, 00:20:07.122 "auth": { 00:20:07.122 "state": "completed", 00:20:07.122 "digest": "sha256", 00:20:07.122 "dhgroup": "ffdhe3072" 00:20:07.122 } 00:20:07.122 } 00:20:07.122 ]' 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.122 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.381 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:07.381 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.317 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.575 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.144 00:20:09.144 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.144 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.144 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.402 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.402 01:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.402 { 00:20:09.402 "cntlid": 23, 00:20:09.402 "qid": 0, 00:20:09.402 "state": "enabled", 00:20:09.402 "thread": "nvmf_tgt_poll_group_000", 00:20:09.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.402 "listen_address": { 00:20:09.402 "trtype": "TCP", 00:20:09.402 "adrfam": "IPv4", 00:20:09.402 "traddr": "10.0.0.2", 00:20:09.402 "trsvcid": "4420" 00:20:09.402 }, 00:20:09.402 "peer_address": { 00:20:09.402 "trtype": "TCP", 00:20:09.402 "adrfam": "IPv4", 00:20:09.402 "traddr": "10.0.0.1", 00:20:09.402 "trsvcid": "53832" 00:20:09.402 }, 00:20:09.402 "auth": { 00:20:09.402 "state": "completed", 00:20:09.402 "digest": "sha256", 00:20:09.402 "dhgroup": "ffdhe3072" 00:20:09.402 } 00:20:09.402 } 00:20:09.402 ]' 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.402 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.661 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:09.661 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.599 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.857 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.424 00:20:11.424 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.424 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.424 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.681 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.682 { 00:20:11.682 "cntlid": 25, 00:20:11.682 "qid": 0, 00:20:11.682 "state": "enabled", 00:20:11.682 "thread": "nvmf_tgt_poll_group_000", 00:20:11.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.682 "listen_address": { 00:20:11.682 "trtype": "TCP", 00:20:11.682 "adrfam": "IPv4", 00:20:11.682 "traddr": "10.0.0.2", 00:20:11.682 "trsvcid": "4420" 00:20:11.682 }, 00:20:11.682 "peer_address": { 00:20:11.682 "trtype": "TCP", 00:20:11.682 "adrfam": "IPv4", 00:20:11.682 "traddr": "10.0.0.1", 00:20:11.682 "trsvcid": "49042" 00:20:11.682 }, 00:20:11.682 "auth": { 00:20:11.682 "state": "completed", 00:20:11.682 "digest": "sha256", 00:20:11.682 "dhgroup": "ffdhe4096" 00:20:11.682 } 00:20:11.682 } 00:20:11.682 ]' 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.682 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.940 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:11.940 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:12.874 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.874 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.874 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.874 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.132 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.132 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.132 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.132 01:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.390 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.648 00:20:13.648 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.648 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.648 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.905 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.905 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.905 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.905 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.906 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.906 { 00:20:13.906 "cntlid": 27, 00:20:13.906 "qid": 0, 00:20:13.906 "state": "enabled", 00:20:13.906 "thread": "nvmf_tgt_poll_group_000", 00:20:13.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.906 "listen_address": { 00:20:13.906 "trtype": "TCP", 00:20:13.906 "adrfam": "IPv4", 00:20:13.906 "traddr": "10.0.0.2", 00:20:13.906 "trsvcid": "4420" 00:20:13.906 }, 00:20:13.906 "peer_address": { 00:20:13.906 "trtype": "TCP", 00:20:13.906 "adrfam": "IPv4", 00:20:13.906 "traddr": "10.0.0.1", 00:20:13.906 "trsvcid": "49080" 00:20:13.906 }, 00:20:13.906 "auth": { 00:20:13.906 "state": "completed", 00:20:13.906 "digest": "sha256", 00:20:13.906 "dhgroup": "ffdhe4096" 00:20:13.906 } 00:20:13.906 } 00:20:13.906 ]' 00:20:13.906 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.906 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.906 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.162 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.162 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.162 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.162 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.162 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.418 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:14.418 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:15.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.353 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.610 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.178 00:20:16.178 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
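For orientation, the trace above repeats one connect_authenticate cycle per key for the current digest/dhgroup pair. A minimal bash sketch of that cycle, reconstructed only from the commands visible in the trace (the rpc_cmd target socket and the SPDK_ROOT/HOSTNQN/keyN names are illustrative assumptions; the DHHC-1 secrets are elided):

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as exercised by target/auth.sh in the trace above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

hostrpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side RPC socket, as in the trace
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }                        # target-side RPC (default socket assumed)

digest=sha256 dhgroup=ffdhe4096 keyid=2

# Restrict the host initiator to one digest/dhgroup pair (auth.sh@121 in the trace).
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the subsystem with the matching DH-HMAC-CHAP key (and controller key, when one exists).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller from the host side; authentication runs during this connect.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Confirm on the target that the qpair completed authentication with the expected parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]

# Tear down, repeat the connect with the kernel initiator, then remove the host.
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "${HOSTNQN#*uuid:}" -l 0 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # secrets elided; see the trace above
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"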
00:20:16.178 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.178 01:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.437 { 00:20:16.437 "cntlid": 29, 00:20:16.437 "qid": 0, 00:20:16.437 "state": "enabled", 00:20:16.437 "thread": "nvmf_tgt_poll_group_000", 00:20:16.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:16.437 "listen_address": { 00:20:16.437 "trtype": "TCP", 00:20:16.437 "adrfam": "IPv4", 00:20:16.437 "traddr": "10.0.0.2", 00:20:16.437 "trsvcid": "4420" 00:20:16.437 }, 00:20:16.437 "peer_address": { 00:20:16.437 "trtype": "TCP", 00:20:16.437 "adrfam": "IPv4", 00:20:16.437 "traddr": "10.0.0.1", 00:20:16.437 "trsvcid": "49112" 00:20:16.437 }, 00:20:16.437 "auth": { 00:20:16.437 "state": "completed", 00:20:16.437 "digest": "sha256", 00:20:16.437 "dhgroup": "ffdhe4096" 00:20:16.437 } 00:20:16.437 } 00:20:16.437 ]' 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.437 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.696 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:16.696 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: 
--dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.628 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.887 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.454 00:20:18.454 01:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.454 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.454 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.711 { 00:20:18.711 "cntlid": 31, 00:20:18.711 "qid": 0, 00:20:18.711 "state": "enabled", 00:20:18.711 "thread": "nvmf_tgt_poll_group_000", 00:20:18.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.711 "listen_address": { 00:20:18.711 "trtype": "TCP", 00:20:18.711 "adrfam": "IPv4", 00:20:18.711 "traddr": "10.0.0.2", 00:20:18.711 "trsvcid": "4420" 00:20:18.711 }, 00:20:18.711 "peer_address": { 00:20:18.711 "trtype": "TCP", 00:20:18.711 "adrfam": "IPv4", 00:20:18.711 "traddr": "10.0.0.1", 00:20:18.711 "trsvcid": "49148" 00:20:18.711 }, 00:20:18.711 "auth": { 00:20:18.711 "state": "completed", 00:20:18.711 "digest": "sha256", 00:20:18.711 "dhgroup": "ffdhe4096" 00:20:18.711 } 00:20:18.711 } 00:20:18.711 ]' 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.711 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.969 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:18.969 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:19.904 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.162 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.420 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.984 00:20:20.984 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.984 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.984 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.242 { 00:20:21.242 "cntlid": 33, 00:20:21.242 "qid": 0, 00:20:21.242 "state": "enabled", 00:20:21.242 "thread": "nvmf_tgt_poll_group_000", 00:20:21.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.242 "listen_address": { 00:20:21.242 "trtype": "TCP", 00:20:21.242 "adrfam": "IPv4", 00:20:21.242 "traddr": "10.0.0.2", 00:20:21.242 "trsvcid": "4420" 00:20:21.242 }, 00:20:21.242 "peer_address": { 00:20:21.242 "trtype": "TCP", 00:20:21.242 "adrfam": "IPv4", 00:20:21.242 "traddr": "10.0.0.1", 00:20:21.242 "trsvcid": "49182" 00:20:21.242 }, 00:20:21.242 "auth": { 00:20:21.242 "state": "completed", 00:20:21.242 "digest": "sha256", 00:20:21.242 "dhgroup": "ffdhe6144" 00:20:21.242 } 00:20:21.242 } 00:20:21.242 ]' 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.242 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.242 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.242 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.242 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.499 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret 
DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:21.499 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.431 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.688 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.946 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.946 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.946 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.946 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.511 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.511 { 00:20:23.511 "cntlid": 35, 00:20:23.511 "qid": 0, 00:20:23.511 "state": "enabled", 00:20:23.511 "thread": "nvmf_tgt_poll_group_000", 00:20:23.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.511 "listen_address": { 00:20:23.511 "trtype": "TCP", 00:20:23.511 "adrfam": "IPv4", 00:20:23.511 "traddr": "10.0.0.2", 00:20:23.511 "trsvcid": "4420" 00:20:23.511 }, 00:20:23.511 "peer_address": { 00:20:23.511 "trtype": "TCP", 00:20:23.511 "adrfam": "IPv4", 00:20:23.511 "traddr": "10.0.0.1", 00:20:23.511 "trsvcid": "42220" 00:20:23.511 }, 00:20:23.511 "auth": { 00:20:23.511 "state": "completed", 00:20:23.511 "digest": "sha256", 00:20:23.511 "dhgroup": "ffdhe6144" 00:20:23.511 } 00:20:23.511 } 00:20:23.511 ]' 00:20:23.511 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.769 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.026 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:24.026 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.958 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.524 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.088 00:20:26.088 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.088 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.088 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.345 { 00:20:26.345 "cntlid": 37, 00:20:26.345 "qid": 0, 00:20:26.345 "state": "enabled", 00:20:26.345 "thread": "nvmf_tgt_poll_group_000", 00:20:26.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.345 "listen_address": { 00:20:26.345 "trtype": "TCP", 00:20:26.345 "adrfam": "IPv4", 00:20:26.345 "traddr": "10.0.0.2", 00:20:26.345 "trsvcid": "4420" 00:20:26.345 }, 00:20:26.345 "peer_address": { 00:20:26.345 "trtype": "TCP", 00:20:26.345 "adrfam": "IPv4", 00:20:26.345 "traddr": "10.0.0.1", 00:20:26.345 "trsvcid": "42246" 00:20:26.345 }, 00:20:26.345 "auth": { 00:20:26.345 "state": "completed", 00:20:26.345 "digest": "sha256", 00:20:26.345 "dhgroup": "ffdhe6144" 00:20:26.345 } 00:20:26.345 } 00:20:26.345 ]' 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:26.345 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.601 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:26.601 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.532 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.789 01:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.789 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.354 00:20:28.354 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.354 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.354 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.611 { 00:20:28.611 "cntlid": 39, 00:20:28.611 "qid": 0, 00:20:28.611 "state": "enabled", 00:20:28.611 "thread": "nvmf_tgt_poll_group_000", 00:20:28.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.611 "listen_address": { 00:20:28.611 "trtype": "TCP", 00:20:28.611 "adrfam": "IPv4", 00:20:28.611 "traddr": "10.0.0.2", 00:20:28.611 "trsvcid": "4420" 00:20:28.611 }, 00:20:28.611 "peer_address": { 00:20:28.611 "trtype": "TCP", 00:20:28.611 "adrfam": "IPv4", 00:20:28.611 "traddr": "10.0.0.1", 00:20:28.611 "trsvcid": "42262" 00:20:28.611 }, 00:20:28.611 "auth": { 00:20:28.611 "state": "completed", 00:20:28.611 "digest": "sha256", 00:20:28.611 "dhgroup": "ffdhe6144" 00:20:28.611 } 00:20:28.611 } 00:20:28.611 ]' 00:20:28.611 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.868 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.124 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:29.124 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.055 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
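The trace around this point repeats one verification cycle per digest/dhgroup/key combination (here sha256 with ffdhe6144, then ffdhe8192). As a rough host-side sketch of that cycle, distilled from the commands visible in this log rather than copied from target/auth.sh, with rpc.py shown via its relative scripts/ path and the long host NQN abbreviated to <host-nqn> for readability:

    # limit what the host-side bdev_nvme layer may negotiate for DH-HMAC-CHAP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # authorize the host on the target subsystem, binding a key (plus a controller key for bidirectional auth)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach a controller over TCP from the host, authenticating with the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify that authentication completed with the expected digest and dhgroup
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    # tear the controller down before the next combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
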
00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.619 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.589 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.589 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.589 { 00:20:31.589 "cntlid": 41, 00:20:31.589 "qid": 0, 00:20:31.589 "state": "enabled", 00:20:31.589 "thread": "nvmf_tgt_poll_group_000", 00:20:31.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.589 "listen_address": { 00:20:31.589 "trtype": "TCP", 00:20:31.589 "adrfam": "IPv4", 00:20:31.589 "traddr": "10.0.0.2", 00:20:31.589 "trsvcid": "4420" 00:20:31.590 }, 00:20:31.590 "peer_address": { 00:20:31.590 "trtype": "TCP", 00:20:31.590 "adrfam": "IPv4", 00:20:31.590 "traddr": "10.0.0.1", 00:20:31.590 "trsvcid": "52012" 00:20:31.590 }, 00:20:31.590 "auth": { 00:20:31.590 "state": "completed", 00:20:31.590 "digest": "sha256", 00:20:31.590 "dhgroup": "ffdhe8192" 00:20:31.590 } 00:20:31.590 } 00:20:31.590 ]' 00:20:31.590 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.873 01:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.873 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.130 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:32.130 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:33.059 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.059 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.059 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.060 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.060 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.060 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.060 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.060 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.317 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.250 00:20:34.250 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.250 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.250 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.508 { 00:20:34.508 "cntlid": 43, 00:20:34.508 "qid": 0, 00:20:34.508 "state": "enabled", 00:20:34.508 "thread": "nvmf_tgt_poll_group_000", 00:20:34.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.508 "listen_address": { 00:20:34.508 "trtype": "TCP", 00:20:34.508 "adrfam": "IPv4", 00:20:34.508 "traddr": "10.0.0.2", 00:20:34.508 "trsvcid": "4420" 00:20:34.508 }, 00:20:34.508 "peer_address": { 00:20:34.508 "trtype": "TCP", 00:20:34.508 "adrfam": "IPv4", 00:20:34.508 "traddr": "10.0.0.1", 00:20:34.508 "trsvcid": "52030" 00:20:34.508 }, 00:20:34.508 "auth": { 00:20:34.508 "state": "completed", 00:20:34.508 "digest": "sha256", 00:20:34.508 "dhgroup": "ffdhe8192" 00:20:34.508 } 00:20:34.508 } 00:20:34.508 ]' 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:34.508 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.766 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.766 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.766 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.766 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.766 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.024 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:35.024 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.957 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.215 01:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.215 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.147 00:20:37.147 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.147 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.147 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.405 { 00:20:37.405 "cntlid": 45, 00:20:37.405 "qid": 0, 00:20:37.405 "state": "enabled", 00:20:37.405 "thread": "nvmf_tgt_poll_group_000", 00:20:37.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.405 "listen_address": { 00:20:37.405 "trtype": "TCP", 00:20:37.405 "adrfam": "IPv4", 00:20:37.405 "traddr": "10.0.0.2", 00:20:37.405 "trsvcid": "4420" 00:20:37.405 }, 00:20:37.405 "peer_address": { 00:20:37.405 "trtype": "TCP", 00:20:37.405 "adrfam": "IPv4", 00:20:37.405 "traddr": "10.0.0.1", 00:20:37.405 "trsvcid": "52050" 00:20:37.405 }, 00:20:37.405 "auth": { 00:20:37.405 "state": "completed", 00:20:37.405 "digest": "sha256", 00:20:37.405 "dhgroup": "ffdhe8192" 00:20:37.405 } 00:20:37.405 } 00:20:37.405 ]' 00:20:37.405 
01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.405 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.662 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:37.662 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.595 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.852 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:38.852 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.852 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.852 01:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.852 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.852 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.853 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.784 00:20:39.784 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.784 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.784 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.041 { 00:20:40.041 "cntlid": 47, 00:20:40.041 "qid": 0, 00:20:40.041 "state": "enabled", 00:20:40.041 "thread": "nvmf_tgt_poll_group_000", 00:20:40.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.041 "listen_address": { 00:20:40.041 "trtype": "TCP", 00:20:40.041 "adrfam": "IPv4", 00:20:40.041 "traddr": "10.0.0.2", 00:20:40.041 "trsvcid": "4420" 00:20:40.041 }, 00:20:40.041 "peer_address": { 00:20:40.041 "trtype": "TCP", 00:20:40.041 "adrfam": "IPv4", 00:20:40.041 "traddr": "10.0.0.1", 00:20:40.041 "trsvcid": "52072" 00:20:40.041 }, 00:20:40.041 "auth": { 00:20:40.041 "state": "completed", 00:20:40.041 
"digest": "sha256", 00:20:40.041 "dhgroup": "ffdhe8192" 00:20:40.041 } 00:20:40.041 } 00:20:40.041 ]' 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.041 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.042 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.300 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.300 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.300 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.300 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.300 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:40.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.491 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:41.749 01:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.749 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.314 00:20:42.314 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.314 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.314 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.572 { 00:20:42.572 "cntlid": 49, 00:20:42.572 "qid": 0, 00:20:42.572 "state": "enabled", 00:20:42.572 "thread": "nvmf_tgt_poll_group_000", 00:20:42.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.572 "listen_address": { 00:20:42.572 "trtype": "TCP", 00:20:42.572 "adrfam": "IPv4", 
00:20:42.572 "traddr": "10.0.0.2", 00:20:42.572 "trsvcid": "4420" 00:20:42.572 }, 00:20:42.572 "peer_address": { 00:20:42.572 "trtype": "TCP", 00:20:42.572 "adrfam": "IPv4", 00:20:42.572 "traddr": "10.0.0.1", 00:20:42.572 "trsvcid": "57474" 00:20:42.572 }, 00:20:42.572 "auth": { 00:20:42.572 "state": "completed", 00:20:42.572 "digest": "sha384", 00:20:42.572 "dhgroup": "null" 00:20:42.572 } 00:20:42.572 } 00:20:42.572 ]' 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.572 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.829 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:42.830 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.763 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.021 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.586 00:20:44.586 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.586 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.586 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.844 { 00:20:44.844 "cntlid": 51, 00:20:44.844 "qid": 0, 00:20:44.844 "state": "enabled", 
00:20:44.844 "thread": "nvmf_tgt_poll_group_000", 00:20:44.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.844 "listen_address": { 00:20:44.844 "trtype": "TCP", 00:20:44.844 "adrfam": "IPv4", 00:20:44.844 "traddr": "10.0.0.2", 00:20:44.844 "trsvcid": "4420" 00:20:44.844 }, 00:20:44.844 "peer_address": { 00:20:44.844 "trtype": "TCP", 00:20:44.844 "adrfam": "IPv4", 00:20:44.844 "traddr": "10.0.0.1", 00:20:44.844 "trsvcid": "57512" 00:20:44.844 }, 00:20:44.844 "auth": { 00:20:44.844 "state": "completed", 00:20:44.844 "digest": "sha384", 00:20:44.844 "dhgroup": "null" 00:20:44.844 } 00:20:44.844 } 00:20:44.844 ]' 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.844 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.102 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:45.102 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:46.034 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:46.035 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.292 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.857 00:20:46.857 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.857 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.857 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.114 01:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.114 { 00:20:47.114 "cntlid": 53, 00:20:47.114 "qid": 0, 00:20:47.114 "state": "enabled", 00:20:47.114 "thread": "nvmf_tgt_poll_group_000", 00:20:47.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.114 "listen_address": { 00:20:47.114 "trtype": "TCP", 00:20:47.114 "adrfam": "IPv4", 00:20:47.114 "traddr": "10.0.0.2", 00:20:47.114 "trsvcid": "4420" 00:20:47.114 }, 00:20:47.114 "peer_address": { 00:20:47.114 "trtype": "TCP", 00:20:47.114 "adrfam": "IPv4", 00:20:47.114 "traddr": "10.0.0.1", 00:20:47.114 "trsvcid": "57544" 00:20:47.114 }, 00:20:47.114 "auth": { 00:20:47.114 "state": "completed", 00:20:47.114 "digest": "sha384", 00:20:47.114 "dhgroup": "null" 00:20:47.114 } 00:20:47.114 } 00:20:47.114 ]' 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.114 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.372 01:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:47.372 01:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.744 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.001 00:20:49.001 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.001 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.001 01:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.259 { 00:20:49.259 "cntlid": 55, 00:20:49.259 "qid": 0, 00:20:49.259 "state": "enabled", 00:20:49.259 "thread": "nvmf_tgt_poll_group_000", 00:20:49.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.259 "listen_address": { 00:20:49.259 "trtype": "TCP", 00:20:49.259 "adrfam": "IPv4", 00:20:49.259 "traddr": "10.0.0.2", 00:20:49.259 "trsvcid": "4420" 00:20:49.259 }, 00:20:49.259 "peer_address": { 00:20:49.259 "trtype": "TCP", 00:20:49.259 "adrfam": "IPv4", 00:20:49.259 "traddr": "10.0.0.1", 00:20:49.259 "trsvcid": "57580" 00:20:49.259 }, 00:20:49.259 "auth": { 00:20:49.259 "state": "completed", 00:20:49.259 "digest": "sha384", 00:20:49.259 "dhgroup": "null" 00:20:49.259 } 00:20:49.259 } 00:20:49.259 ]' 00:20:49.259 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.516 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.773 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:49.773 01:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:50.704 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.705 01:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.705 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.270 01:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.528 00:20:51.528 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.528 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.528 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.786 { 00:20:51.786 "cntlid": 57, 00:20:51.786 "qid": 0, 00:20:51.786 "state": "enabled", 00:20:51.786 "thread": "nvmf_tgt_poll_group_000", 00:20:51.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.786 "listen_address": { 00:20:51.786 "trtype": "TCP", 00:20:51.786 "adrfam": "IPv4", 00:20:51.786 "traddr": "10.0.0.2", 00:20:51.786 "trsvcid": "4420" 00:20:51.786 }, 00:20:51.786 "peer_address": { 00:20:51.786 "trtype": "TCP", 00:20:51.786 "adrfam": "IPv4", 00:20:51.786 "traddr": "10.0.0.1", 00:20:51.786 "trsvcid": "40910" 00:20:51.786 }, 00:20:51.786 "auth": { 00:20:51.786 "state": "completed", 00:20:51.786 "digest": "sha384", 00:20:51.786 "dhgroup": "ffdhe2048" 00:20:51.786 } 00:20:51.786 } 00:20:51.786 ]' 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.786 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.043 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:52.044 01:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.975 01:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.232 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.233 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.233 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.233 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.796 00:20:53.796 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.796 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.796 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.053 { 00:20:54.053 "cntlid": 59, 00:20:54.053 "qid": 0, 00:20:54.053 "state": "enabled", 00:20:54.053 "thread": "nvmf_tgt_poll_group_000", 00:20:54.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.053 "listen_address": { 00:20:54.053 "trtype": "TCP", 00:20:54.053 "adrfam": "IPv4", 00:20:54.053 "traddr": "10.0.0.2", 00:20:54.053 "trsvcid": "4420" 00:20:54.053 }, 00:20:54.053 "peer_address": { 00:20:54.053 "trtype": "TCP", 00:20:54.053 "adrfam": "IPv4", 00:20:54.053 "traddr": "10.0.0.1", 00:20:54.053 "trsvcid": "40942" 00:20:54.053 }, 00:20:54.053 "auth": { 00:20:54.053 "state": "completed", 00:20:54.053 "digest": "sha384", 00:20:54.053 "dhgroup": "ffdhe2048" 00:20:54.053 } 00:20:54.053 } 00:20:54.053 ]' 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.053 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.054 01:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.311 01:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:54.311 01:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.242 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.499 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.064 00:20:56.064 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.064 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.064 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.321 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.321 { 00:20:56.321 "cntlid": 61, 00:20:56.321 "qid": 0, 00:20:56.321 "state": "enabled", 00:20:56.321 "thread": "nvmf_tgt_poll_group_000", 00:20:56.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.321 "listen_address": { 00:20:56.321 "trtype": "TCP", 00:20:56.321 "adrfam": "IPv4", 00:20:56.321 "traddr": "10.0.0.2", 00:20:56.321 "trsvcid": "4420" 00:20:56.321 }, 00:20:56.321 "peer_address": { 00:20:56.321 "trtype": "TCP", 00:20:56.321 "adrfam": "IPv4", 00:20:56.321 "traddr": "10.0.0.1", 00:20:56.322 "trsvcid": "40960" 00:20:56.322 }, 00:20:56.322 "auth": { 00:20:56.322 "state": "completed", 00:20:56.322 "digest": "sha384", 00:20:56.322 "dhgroup": "ffdhe2048" 00:20:56.322 } 00:20:56.322 } 00:20:56.322 ]' 00:20:56.322 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.322 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.322 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.322 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.322 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.322 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.322 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.322 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.579 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:56.579 01:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.512 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.078 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.336 00:20:58.336 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.336 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.336 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.594 { 00:20:58.594 "cntlid": 63, 00:20:58.594 "qid": 0, 00:20:58.594 "state": "enabled", 00:20:58.594 "thread": "nvmf_tgt_poll_group_000", 00:20:58.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.594 "listen_address": { 00:20:58.594 "trtype": "TCP", 00:20:58.594 "adrfam": "IPv4", 00:20:58.594 "traddr": "10.0.0.2", 00:20:58.594 "trsvcid": "4420" 00:20:58.594 }, 00:20:58.594 "peer_address": { 00:20:58.594 "trtype": "TCP", 00:20:58.594 "adrfam": "IPv4", 00:20:58.594 "traddr": "10.0.0.1", 00:20:58.594 "trsvcid": "40988" 00:20:58.594 }, 00:20:58.594 "auth": { 00:20:58.594 "state": "completed", 00:20:58.594 "digest": "sha384", 00:20:58.594 "dhgroup": "ffdhe2048" 00:20:58.594 } 00:20:58.594 } 00:20:58.594 ]' 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.594 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.851 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:58.851 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:59.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.783 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.041 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.606 
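The ffdhe3072/key0 pass above configures the host for sha384 with ffdhe3072 and then attaches with both a host key and a controller key, so DH-HMAC-CHAP runs in both directions. A minimal host-side sketch of that sequence follows; scripts/rpc.py stands in for the full workspace path used in the log, and key0/ckey0 are assumed to be already registered on the host from the secrets generated earlier in auth.sh.

# host-side RPC helper, mirroring hostrpc in auth.sh (socket path as in the log)
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# restrict the host to the digest/dhgroup pair under test
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# attach to the target: key0 authenticates the host, ckey0 lets the host verify the controller
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0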
00:21:00.606 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.606 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.606 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.864 { 00:21:00.864 "cntlid": 65, 00:21:00.864 "qid": 0, 00:21:00.864 "state": "enabled", 00:21:00.864 "thread": "nvmf_tgt_poll_group_000", 00:21:00.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.864 "listen_address": { 00:21:00.864 "trtype": "TCP", 00:21:00.864 "adrfam": "IPv4", 00:21:00.864 "traddr": "10.0.0.2", 00:21:00.864 "trsvcid": "4420" 00:21:00.864 }, 00:21:00.864 "peer_address": { 00:21:00.864 "trtype": "TCP", 00:21:00.864 "adrfam": "IPv4", 00:21:00.864 "traddr": "10.0.0.1", 00:21:00.864 "trsvcid": "41008" 00:21:00.864 }, 00:21:00.864 "auth": { 00:21:00.864 "state": "completed", 00:21:00.864 "digest": "sha384", 00:21:00.864 "dhgroup": "ffdhe3072" 00:21:00.864 } 00:21:00.864 } 00:21:00.864 ]' 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.864 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.121 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:01.121 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.079 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.367 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.937 00:21:02.937 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.937 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.937 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.195 { 00:21:03.195 "cntlid": 67, 00:21:03.195 "qid": 0, 00:21:03.195 "state": "enabled", 00:21:03.195 "thread": "nvmf_tgt_poll_group_000", 00:21:03.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.195 "listen_address": { 00:21:03.195 "trtype": "TCP", 00:21:03.195 "adrfam": "IPv4", 00:21:03.195 "traddr": "10.0.0.2", 00:21:03.195 "trsvcid": "4420" 00:21:03.195 }, 00:21:03.195 "peer_address": { 00:21:03.195 "trtype": "TCP", 00:21:03.195 "adrfam": "IPv4", 00:21:03.195 "traddr": "10.0.0.1", 00:21:03.195 "trsvcid": "60076" 00:21:03.195 }, 00:21:03.195 "auth": { 00:21:03.195 "state": "completed", 00:21:03.195 "digest": "sha384", 00:21:03.195 "dhgroup": "ffdhe3072" 00:21:03.195 } 00:21:03.195 } 00:21:03.195 ]' 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.195 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.452 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret 
DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:03.453 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.384 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.641 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.642 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.642 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.215 00:21:05.215 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.215 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.215 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.472 { 00:21:05.472 "cntlid": 69, 00:21:05.472 "qid": 0, 00:21:05.472 "state": "enabled", 00:21:05.472 "thread": "nvmf_tgt_poll_group_000", 00:21:05.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.472 "listen_address": { 00:21:05.472 "trtype": "TCP", 00:21:05.472 "adrfam": "IPv4", 00:21:05.472 "traddr": "10.0.0.2", 00:21:05.472 "trsvcid": "4420" 00:21:05.472 }, 00:21:05.472 "peer_address": { 00:21:05.472 "trtype": "TCP", 00:21:05.472 "adrfam": "IPv4", 00:21:05.472 "traddr": "10.0.0.1", 00:21:05.472 "trsvcid": "60116" 00:21:05.472 }, 00:21:05.472 "auth": { 00:21:05.472 "state": "completed", 00:21:05.472 "digest": "sha384", 00:21:05.472 "dhgroup": "ffdhe3072" 00:21:05.472 } 00:21:05.472 } 00:21:05.472 ]' 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.472 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:05.728 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:05.729 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.660 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
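In the key3 pass just above, no controller key is configured (the ckeys entry for key3 is empty), so only the host is authenticated and --dhchap-ctrlr-key is dropped on both sides. A condensed sketch of that unidirectional variant, assuming the target-side rpc_cmd talks to the default SPDK RPC socket and using the same placeholder NQNs as in the log:

# target side: register the host for cnode0 with key3 only (no controller key)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key3

# host side: attach without a controller key; the controller is not authenticated back
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3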
00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.918 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.483 00:21:07.483 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.483 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.483 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.741 { 00:21:07.741 "cntlid": 71, 00:21:07.741 "qid": 0, 00:21:07.741 "state": "enabled", 00:21:07.741 "thread": "nvmf_tgt_poll_group_000", 00:21:07.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.741 "listen_address": { 00:21:07.741 "trtype": "TCP", 00:21:07.741 "adrfam": "IPv4", 00:21:07.741 "traddr": "10.0.0.2", 00:21:07.741 "trsvcid": "4420" 00:21:07.741 }, 00:21:07.741 "peer_address": { 00:21:07.741 "trtype": "TCP", 00:21:07.741 "adrfam": "IPv4", 00:21:07.741 "traddr": "10.0.0.1", 00:21:07.741 "trsvcid": "60142" 00:21:07.741 }, 00:21:07.741 "auth": { 00:21:07.741 "state": "completed", 00:21:07.741 "digest": "sha384", 00:21:07.741 "dhgroup": "ffdhe3072" 00:21:07.741 } 00:21:07.741 } 00:21:07.741 ]' 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.741 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.001 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:08.001 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:08.932 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.932 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.932 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.932 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.932 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.933 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.933 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.933 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.933 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.190 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
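The verification and teardown performed after each attach follows the same pattern throughout the trace: confirm that the host-side controller nvme0 exists, confirm on the target side that the qpair negotiated exactly the expected digest and DH group and that its auth state is "completed", then tear the association down before the next key/dhgroup combination. A condensed sketch of those checks using the same RPCs and jq filters as the trace is shown below; the exact ordering here is simplified (the harness also connects and disconnects the kernel initiator with `nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...` before removing the host), and paths/NQNs are the same assumed values as in the previous sketch.

# Sketch only: post-attach checks and teardown, mirroring the traced RPCs.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: the controller created by the authenticated attach should be listed.
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'       # expect: nvme0

# Target side: the subsystem's qpair should report the negotiated auth parameters.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'     # expect: sha384
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'    # expect: the dhgroup of this iteration
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'      # expect: completed

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"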
00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.191 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.756 00:21:09.756 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.756 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.756 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.014 { 00:21:10.014 "cntlid": 73, 00:21:10.014 "qid": 0, 00:21:10.014 "state": "enabled", 00:21:10.014 "thread": "nvmf_tgt_poll_group_000", 00:21:10.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:10.014 "listen_address": { 00:21:10.014 "trtype": "TCP", 00:21:10.014 "adrfam": "IPv4", 00:21:10.014 "traddr": "10.0.0.2", 00:21:10.014 "trsvcid": "4420" 00:21:10.014 }, 00:21:10.014 "peer_address": { 00:21:10.014 "trtype": "TCP", 00:21:10.014 "adrfam": "IPv4", 00:21:10.014 "traddr": "10.0.0.1", 00:21:10.014 "trsvcid": "60180" 00:21:10.014 }, 00:21:10.014 "auth": { 00:21:10.014 "state": "completed", 00:21:10.014 "digest": "sha384", 00:21:10.014 "dhgroup": "ffdhe4096" 00:21:10.014 } 00:21:10.014 } 00:21:10.014 ]' 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.014 
01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.014 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.579 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:10.579 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.510 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.767 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.768 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.024 00:21:12.024 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.024 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.024 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.282 { 00:21:12.282 "cntlid": 75, 00:21:12.282 "qid": 0, 00:21:12.282 "state": "enabled", 00:21:12.282 "thread": "nvmf_tgt_poll_group_000", 00:21:12.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.282 "listen_address": { 00:21:12.282 "trtype": "TCP", 00:21:12.282 "adrfam": "IPv4", 00:21:12.282 "traddr": "10.0.0.2", 00:21:12.282 "trsvcid": "4420" 00:21:12.282 }, 00:21:12.282 "peer_address": { 00:21:12.282 "trtype": "TCP", 00:21:12.282 "adrfam": "IPv4", 00:21:12.282 "traddr": "10.0.0.1", 00:21:12.282 "trsvcid": "37302" 00:21:12.282 }, 00:21:12.282 "auth": { 00:21:12.282 "state": "completed", 00:21:12.282 "digest": "sha384", 00:21:12.282 "dhgroup": "ffdhe4096" 00:21:12.282 } 00:21:12.282 } 00:21:12.282 ]' 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.282 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.539 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:12.539 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.539 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.539 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.539 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.797 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:12.797 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.728 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.985 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.550 00:21:14.550 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.550 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.550 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.808 { 00:21:14.808 "cntlid": 77, 00:21:14.808 "qid": 0, 00:21:14.808 "state": "enabled", 00:21:14.808 "thread": "nvmf_tgt_poll_group_000", 00:21:14.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.808 "listen_address": { 00:21:14.808 "trtype": "TCP", 00:21:14.808 "adrfam": "IPv4", 00:21:14.808 "traddr": "10.0.0.2", 00:21:14.808 "trsvcid": "4420" 00:21:14.808 }, 00:21:14.808 "peer_address": { 00:21:14.808 "trtype": "TCP", 00:21:14.808 "adrfam": "IPv4", 00:21:14.808 "traddr": "10.0.0.1", 00:21:14.808 "trsvcid": "37326" 00:21:14.808 }, 00:21:14.808 "auth": { 00:21:14.808 "state": "completed", 00:21:14.808 "digest": "sha384", 00:21:14.808 "dhgroup": "ffdhe4096" 00:21:14.808 } 00:21:14.808 } 00:21:14.808 ]' 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.808 01:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.808 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.066 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:15.066 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:15.997 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.997 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.997 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.997 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.255 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.255 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.255 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.255 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.512 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:16.512 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.512 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.513 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.770 00:21:16.770 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.770 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.770 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.027 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.028 { 00:21:17.028 "cntlid": 79, 00:21:17.028 "qid": 0, 00:21:17.028 "state": "enabled", 00:21:17.028 "thread": "nvmf_tgt_poll_group_000", 00:21:17.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.028 "listen_address": { 00:21:17.028 "trtype": "TCP", 00:21:17.028 "adrfam": "IPv4", 00:21:17.028 "traddr": "10.0.0.2", 00:21:17.028 "trsvcid": "4420" 00:21:17.028 }, 00:21:17.028 "peer_address": { 00:21:17.028 "trtype": "TCP", 00:21:17.028 "adrfam": "IPv4", 00:21:17.028 "traddr": "10.0.0.1", 00:21:17.028 "trsvcid": "37360" 00:21:17.028 }, 00:21:17.028 "auth": { 00:21:17.028 "state": "completed", 00:21:17.028 "digest": "sha384", 00:21:17.028 "dhgroup": "ffdhe4096" 00:21:17.028 } 00:21:17.028 } 00:21:17.028 ]' 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.028 01:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.028 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.285 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.285 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.285 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.285 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.285 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.543 01:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:17.543 01:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.474 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.732 01:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.732 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.296 00:21:19.296 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.296 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.296 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.554 { 00:21:19.554 "cntlid": 81, 00:21:19.554 "qid": 0, 00:21:19.554 "state": "enabled", 00:21:19.554 "thread": "nvmf_tgt_poll_group_000", 00:21:19.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.554 "listen_address": { 00:21:19.554 "trtype": "TCP", 00:21:19.554 "adrfam": "IPv4", 00:21:19.554 "traddr": "10.0.0.2", 00:21:19.554 "trsvcid": "4420" 00:21:19.554 }, 00:21:19.554 "peer_address": { 00:21:19.554 "trtype": "TCP", 00:21:19.554 "adrfam": "IPv4", 00:21:19.554 "traddr": "10.0.0.1", 00:21:19.554 "trsvcid": "37388" 00:21:19.554 }, 00:21:19.554 "auth": { 00:21:19.554 "state": "completed", 00:21:19.554 "digest": 
"sha384", 00:21:19.554 "dhgroup": "ffdhe6144" 00:21:19.554 } 00:21:19.554 } 00:21:19.554 ]' 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.554 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.811 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.812 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.812 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.070 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:20.070 01:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.004 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.262 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.263 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.829 00:21:21.829 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.829 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.829 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.088 { 00:21:22.088 "cntlid": 83, 00:21:22.088 "qid": 0, 00:21:22.088 "state": "enabled", 00:21:22.088 "thread": "nvmf_tgt_poll_group_000", 00:21:22.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.088 "listen_address": { 00:21:22.088 "trtype": "TCP", 00:21:22.088 "adrfam": "IPv4", 00:21:22.088 "traddr": "10.0.0.2", 00:21:22.088 
"trsvcid": "4420" 00:21:22.088 }, 00:21:22.088 "peer_address": { 00:21:22.088 "trtype": "TCP", 00:21:22.088 "adrfam": "IPv4", 00:21:22.088 "traddr": "10.0.0.1", 00:21:22.088 "trsvcid": "33960" 00:21:22.088 }, 00:21:22.088 "auth": { 00:21:22.088 "state": "completed", 00:21:22.088 "digest": "sha384", 00:21:22.088 "dhgroup": "ffdhe6144" 00:21:22.088 } 00:21:22.088 } 00:21:22.088 ]' 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.088 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.346 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.346 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.346 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.346 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.346 01:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.605 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:22.605 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.540 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.798 
01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.798 01:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.365 00:21:24.365 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.365 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.365 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.623 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.624 { 00:21:24.624 "cntlid": 85, 00:21:24.624 "qid": 0, 00:21:24.624 "state": "enabled", 00:21:24.624 "thread": "nvmf_tgt_poll_group_000", 00:21:24.624 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.624 "listen_address": { 00:21:24.624 "trtype": "TCP", 00:21:24.624 "adrfam": "IPv4", 00:21:24.624 "traddr": "10.0.0.2", 00:21:24.624 "trsvcid": "4420" 00:21:24.624 }, 00:21:24.624 "peer_address": { 00:21:24.624 "trtype": "TCP", 00:21:24.624 "adrfam": "IPv4", 00:21:24.624 "traddr": "10.0.0.1", 00:21:24.624 "trsvcid": "33968" 00:21:24.624 }, 00:21:24.624 "auth": { 00:21:24.624 "state": "completed", 00:21:24.624 "digest": "sha384", 00:21:24.624 "dhgroup": "ffdhe6144" 00:21:24.624 } 00:21:24.624 } 00:21:24.624 ]' 00:21:24.624 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.881 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.882 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.139 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:25.139 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.072 01:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.072 01:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.330 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.331 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.896 00:21:26.896 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.896 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.896 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.154 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.155 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.155 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.155 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.413 { 00:21:27.413 "cntlid": 87, 
00:21:27.413 "qid": 0, 00:21:27.413 "state": "enabled", 00:21:27.413 "thread": "nvmf_tgt_poll_group_000", 00:21:27.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.413 "listen_address": { 00:21:27.413 "trtype": "TCP", 00:21:27.413 "adrfam": "IPv4", 00:21:27.413 "traddr": "10.0.0.2", 00:21:27.413 "trsvcid": "4420" 00:21:27.413 }, 00:21:27.413 "peer_address": { 00:21:27.413 "trtype": "TCP", 00:21:27.413 "adrfam": "IPv4", 00:21:27.413 "traddr": "10.0.0.1", 00:21:27.413 "trsvcid": "33998" 00:21:27.413 }, 00:21:27.413 "auth": { 00:21:27.413 "state": "completed", 00:21:27.413 "digest": "sha384", 00:21:27.413 "dhgroup": "ffdhe6144" 00:21:27.413 } 00:21:27.413 } 00:21:27.413 ]' 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.413 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.671 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:27.671 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.604 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.863 01:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.796 00:21:29.796 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.796 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.796 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.054 { 00:21:30.054 "cntlid": 89, 00:21:30.054 "qid": 0, 00:21:30.054 "state": "enabled", 00:21:30.054 "thread": "nvmf_tgt_poll_group_000", 00:21:30.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.054 "listen_address": { 00:21:30.054 "trtype": "TCP", 00:21:30.054 "adrfam": "IPv4", 00:21:30.054 "traddr": "10.0.0.2", 00:21:30.054 "trsvcid": "4420" 00:21:30.054 }, 00:21:30.054 "peer_address": { 00:21:30.054 "trtype": "TCP", 00:21:30.054 "adrfam": "IPv4", 00:21:30.054 "traddr": "10.0.0.1", 00:21:30.054 "trsvcid": "34024" 00:21:30.054 }, 00:21:30.054 "auth": { 00:21:30.054 "state": "completed", 00:21:30.054 "digest": "sha384", 00:21:30.054 "dhgroup": "ffdhe8192" 00:21:30.054 } 00:21:30.054 } 00:21:30.054 ]' 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.054 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.313 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.571 01:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:30.571 01:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.505 01:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.505 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.768 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.781 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.781 { 00:21:32.781 "cntlid": 91, 00:21:32.781 "qid": 0, 00:21:32.781 "state": "enabled", 00:21:32.781 "thread": "nvmf_tgt_poll_group_000", 00:21:32.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.781 "listen_address": { 00:21:32.781 "trtype": "TCP", 00:21:32.781 "adrfam": "IPv4", 00:21:32.781 "traddr": "10.0.0.2", 00:21:32.781 "trsvcid": "4420" 00:21:32.781 }, 00:21:32.781 "peer_address": { 00:21:32.781 "trtype": "TCP", 00:21:32.781 "adrfam": "IPv4", 00:21:32.781 "traddr": "10.0.0.1", 00:21:32.781 "trsvcid": "40626" 00:21:32.781 }, 00:21:32.781 "auth": { 00:21:32.781 "state": "completed", 00:21:32.781 "digest": "sha384", 00:21:32.781 "dhgroup": "ffdhe8192" 00:21:32.781 } 00:21:32.781 } 00:21:32.781 ]' 00:21:32.781 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.039 01:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.297 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:33.297 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.232 01:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.232 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.490 01:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.423 00:21:35.423 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.423 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.423 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.681 01:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.681 { 00:21:35.681 "cntlid": 93, 00:21:35.681 "qid": 0, 00:21:35.681 "state": "enabled", 00:21:35.681 "thread": "nvmf_tgt_poll_group_000", 00:21:35.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.681 "listen_address": { 00:21:35.681 "trtype": "TCP", 00:21:35.681 "adrfam": "IPv4", 00:21:35.681 "traddr": "10.0.0.2", 00:21:35.681 "trsvcid": "4420" 00:21:35.681 }, 00:21:35.681 "peer_address": { 00:21:35.681 "trtype": "TCP", 00:21:35.681 "adrfam": "IPv4", 00:21:35.681 "traddr": "10.0.0.1", 00:21:35.681 "trsvcid": "40648" 00:21:35.681 }, 00:21:35.681 "auth": { 00:21:35.681 "state": "completed", 00:21:35.681 "digest": "sha384", 00:21:35.681 "dhgroup": "ffdhe8192" 00:21:35.681 } 00:21:35.681 } 00:21:35.681 ]' 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.681 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.938 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.938 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.938 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.938 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.938 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.195 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:36.195 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:37.130 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.130 01:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.130 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.130 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.130 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.131 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.131 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.131 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.388 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.320 00:21:38.320 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.320 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.320 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.578 { 00:21:38.578 "cntlid": 95, 00:21:38.578 "qid": 0, 00:21:38.578 "state": "enabled", 00:21:38.578 "thread": "nvmf_tgt_poll_group_000", 00:21:38.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.578 "listen_address": { 00:21:38.578 "trtype": "TCP", 00:21:38.578 "adrfam": "IPv4", 00:21:38.578 "traddr": "10.0.0.2", 00:21:38.578 "trsvcid": "4420" 00:21:38.578 }, 00:21:38.578 "peer_address": { 00:21:38.578 "trtype": "TCP", 00:21:38.578 "adrfam": "IPv4", 00:21:38.578 "traddr": "10.0.0.1", 00:21:38.578 "trsvcid": "40666" 00:21:38.578 }, 00:21:38.578 "auth": { 00:21:38.578 "state": "completed", 00:21:38.578 "digest": "sha384", 00:21:38.578 "dhgroup": "ffdhe8192" 00:21:38.578 } 00:21:38.578 } 00:21:38.578 ]' 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.578 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.834 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.834 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.834 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.834 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.834 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.090 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:39.090 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:40.023 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.023 01:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.024 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.281 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.282 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.539 00:21:40.539 
01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.539 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.539 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.105 { 00:21:41.105 "cntlid": 97, 00:21:41.105 "qid": 0, 00:21:41.105 "state": "enabled", 00:21:41.105 "thread": "nvmf_tgt_poll_group_000", 00:21:41.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.105 "listen_address": { 00:21:41.105 "trtype": "TCP", 00:21:41.105 "adrfam": "IPv4", 00:21:41.105 "traddr": "10.0.0.2", 00:21:41.105 "trsvcid": "4420" 00:21:41.105 }, 00:21:41.105 "peer_address": { 00:21:41.105 "trtype": "TCP", 00:21:41.105 "adrfam": "IPv4", 00:21:41.105 "traddr": "10.0.0.1", 00:21:41.105 "trsvcid": "40686" 00:21:41.105 }, 00:21:41.105 "auth": { 00:21:41.105 "state": "completed", 00:21:41.105 "digest": "sha512", 00:21:41.105 "dhgroup": "null" 00:21:41.105 } 00:21:41.105 } 00:21:41.105 ]' 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.105 01:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.364 01:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:41.364 01:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.299 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.558 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.124 00:21:43.124 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.124 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.124 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.382 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.382 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.382 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.382 01:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.382 { 00:21:43.382 "cntlid": 99, 00:21:43.382 "qid": 0, 00:21:43.382 "state": "enabled", 00:21:43.382 "thread": "nvmf_tgt_poll_group_000", 00:21:43.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.382 "listen_address": { 00:21:43.382 "trtype": "TCP", 00:21:43.382 "adrfam": "IPv4", 00:21:43.382 "traddr": "10.0.0.2", 00:21:43.382 "trsvcid": "4420" 00:21:43.382 }, 00:21:43.382 "peer_address": { 00:21:43.382 "trtype": "TCP", 00:21:43.382 "adrfam": "IPv4", 00:21:43.382 "traddr": "10.0.0.1", 00:21:43.382 "trsvcid": "36616" 00:21:43.382 }, 00:21:43.382 "auth": { 00:21:43.382 "state": "completed", 00:21:43.382 "digest": "sha512", 00:21:43.382 "dhgroup": "null" 00:21:43.382 } 00:21:43.382 } 00:21:43.382 ]' 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.382 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.640 01:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:43.640 01:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:44.573 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.573 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.573 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.573 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.574 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.574 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.574 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.574 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
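Note: every connect_authenticate iteration in this trace repeats the same target/host RPC sequence, varying only the digest, DH group and key index. The sketch below condenses one pass (here sha512 / null / key2) into plain rpc.py calls. It is a sketch, not the literal script: it assumes the DHHC-1 keys named key0..key3 and ckey0..ckey3 were registered with both applications earlier in the run, and that the target app answers on the default RPC socket while the host app uses /var/tmp/host.sock as shown in the log.

  #!/usr/bin/env bash
  # One DH-HMAC-CHAP authentication round-trip (condensed sketch).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict the digests/dhgroups the initiator may negotiate.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null

  # Target side: allow the host NQN and bind its key (plus controller key).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, authenticating with the same key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify on the target that the qpair authenticated with the expected
  # parameters, then detach the controller again.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha512
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # null
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
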
00:21:44.832 01:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.397 00:21:45.397 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.397 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.397 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.656 { 00:21:45.656 "cntlid": 101, 00:21:45.656 "qid": 0, 00:21:45.656 "state": "enabled", 00:21:45.656 "thread": "nvmf_tgt_poll_group_000", 00:21:45.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.656 "listen_address": { 00:21:45.656 "trtype": "TCP", 00:21:45.656 "adrfam": "IPv4", 00:21:45.656 "traddr": "10.0.0.2", 00:21:45.656 "trsvcid": "4420" 00:21:45.656 }, 00:21:45.656 "peer_address": { 00:21:45.656 "trtype": "TCP", 00:21:45.656 "adrfam": "IPv4", 00:21:45.656 "traddr": "10.0.0.1", 00:21:45.656 "trsvcid": "36652" 00:21:45.656 }, 00:21:45.656 "auth": { 00:21:45.656 "state": "completed", 00:21:45.656 "digest": "sha512", 00:21:45.656 "dhgroup": "null" 00:21:45.656 } 00:21:45.656 } 00:21:45.656 ]' 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.656 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.913 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:45.913 01:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.843 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.101 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.359 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.360 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.618 00:21:47.618 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.618 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.618 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.876 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.876 { 00:21:47.876 "cntlid": 103, 00:21:47.876 "qid": 0, 00:21:47.876 "state": "enabled", 00:21:47.876 "thread": "nvmf_tgt_poll_group_000", 00:21:47.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.876 "listen_address": { 00:21:47.876 "trtype": "TCP", 00:21:47.876 "adrfam": "IPv4", 00:21:47.876 "traddr": "10.0.0.2", 00:21:47.876 "trsvcid": "4420" 00:21:47.876 }, 00:21:47.876 "peer_address": { 00:21:47.876 "trtype": "TCP", 00:21:47.877 "adrfam": "IPv4", 00:21:47.877 "traddr": "10.0.0.1", 00:21:47.877 "trsvcid": "36692" 00:21:47.877 }, 00:21:47.877 "auth": { 00:21:47.877 "state": "completed", 00:21:47.877 "digest": "sha512", 00:21:47.877 "dhgroup": "null" 00:21:47.877 } 00:21:47.877 } 00:21:47.877 ]' 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.877 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.136 01:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:48.136 01:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.071 01:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
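Note: after each in-process (bdev_nvme) pass, the trace also exercises the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, the expected "disconnected 1 controller(s)" confirms the fabric connect succeeded, and the host mapping is removed before the next digest/dhgroup. A rough sketch follows; the DHHC-1 strings are placeholders for the secrets printed earlier in this log, and rpc.py is assumed to be on PATH talking to the target's default socket.

  # Kernel-initiator check for the same subsystem/host pair (sketch).
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

  # -i 1: one I/O queue, -l 0: no controller-loss grace period (as in the log).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -l 0 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<host secret from log>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret from log>'

  # Tear the connection down and drop the host from the subsystem again.
  nvme disconnect -n "$subnqn"
  rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
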
00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.636 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.894 00:21:49.894 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.894 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.894 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.152 { 00:21:50.152 "cntlid": 105, 00:21:50.152 "qid": 0, 00:21:50.152 "state": "enabled", 00:21:50.152 "thread": "nvmf_tgt_poll_group_000", 00:21:50.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.152 "listen_address": { 00:21:50.152 "trtype": "TCP", 00:21:50.152 "adrfam": "IPv4", 00:21:50.152 "traddr": "10.0.0.2", 00:21:50.152 "trsvcid": "4420" 00:21:50.152 }, 00:21:50.152 "peer_address": { 00:21:50.152 "trtype": "TCP", 00:21:50.152 "adrfam": "IPv4", 00:21:50.152 "traddr": "10.0.0.1", 00:21:50.152 "trsvcid": "36718" 00:21:50.152 }, 00:21:50.152 "auth": { 00:21:50.152 "state": "completed", 00:21:50.152 "digest": "sha512", 00:21:50.152 "dhgroup": "ffdhe2048" 00:21:50.152 } 00:21:50.152 } 00:21:50.152 ]' 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.152 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.152 01:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.410 01:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:50.410 01:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.344 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.910 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.911 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.911 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.911 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.911 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.169 00:21:52.169 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.169 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.169 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.427 { 00:21:52.427 "cntlid": 107, 00:21:52.427 "qid": 0, 00:21:52.427 "state": "enabled", 00:21:52.427 "thread": "nvmf_tgt_poll_group_000", 00:21:52.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.427 "listen_address": { 00:21:52.427 "trtype": "TCP", 00:21:52.427 "adrfam": "IPv4", 00:21:52.427 "traddr": "10.0.0.2", 00:21:52.427 "trsvcid": "4420" 00:21:52.427 }, 00:21:52.427 "peer_address": { 00:21:52.427 "trtype": "TCP", 00:21:52.427 "adrfam": "IPv4", 00:21:52.427 "traddr": "10.0.0.1", 00:21:52.427 "trsvcid": "57932" 00:21:52.427 }, 00:21:52.427 "auth": { 00:21:52.427 "state": "completed", 00:21:52.427 "digest": "sha512", 00:21:52.427 "dhgroup": "ffdhe2048" 00:21:52.427 } 00:21:52.427 } 00:21:52.427 ]' 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.427 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:52.428 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.428 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.428 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.686 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:52.686 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
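Every attach in this trace is followed by the same verification pattern: confirm the controller exists on the host application, then read the target's qpair listing and check the negotiated auth fields before detaching. A condensed sketch of that check, built only from the rpc.py and jq calls visible above; the expected values (sha512/ffdhe2048/completed) are the ones this iteration is driving, and the variable names are illustrative.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Verify the controller came up on the host-side SPDK application.
    name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Ask the target which qpairs the subsystem has and check the negotiated auth fields.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear the connection down again before the next key/dhgroup combination.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0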
00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.061 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.627 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.627 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.628 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.628 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.628 { 00:21:54.628 "cntlid": 109, 00:21:54.628 "qid": 0, 00:21:54.628 "state": "enabled", 00:21:54.628 "thread": "nvmf_tgt_poll_group_000", 00:21:54.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.628 "listen_address": { 00:21:54.628 "trtype": "TCP", 00:21:54.628 "adrfam": "IPv4", 00:21:54.628 "traddr": "10.0.0.2", 00:21:54.628 "trsvcid": "4420" 00:21:54.628 }, 00:21:54.628 "peer_address": { 00:21:54.628 "trtype": "TCP", 00:21:54.628 "adrfam": "IPv4", 00:21:54.628 "traddr": "10.0.0.1", 00:21:54.628 "trsvcid": "57954" 00:21:54.628 }, 00:21:54.628 "auth": { 00:21:54.628 "state": "completed", 00:21:54.628 "digest": "sha512", 00:21:54.628 "dhgroup": "ffdhe2048" 00:21:54.628 } 00:21:54.628 } 00:21:54.628 ]' 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.886 01:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.886 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.144 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:55.144 01:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.078 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.645 01:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.645 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.905 00:21:56.905 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.905 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.905 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.163 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.163 { 00:21:57.163 "cntlid": 111, 00:21:57.164 "qid": 0, 00:21:57.164 "state": "enabled", 00:21:57.164 "thread": "nvmf_tgt_poll_group_000", 00:21:57.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.164 "listen_address": { 00:21:57.164 "trtype": "TCP", 00:21:57.164 "adrfam": "IPv4", 00:21:57.164 "traddr": "10.0.0.2", 00:21:57.164 "trsvcid": "4420" 00:21:57.164 }, 00:21:57.164 "peer_address": { 00:21:57.164 "trtype": "TCP", 00:21:57.164 "adrfam": "IPv4", 00:21:57.164 "traddr": "10.0.0.1", 00:21:57.164 "trsvcid": "57984" 00:21:57.164 }, 00:21:57.164 "auth": { 00:21:57.164 "state": "completed", 00:21:57.164 "digest": "sha512", 00:21:57.164 "dhgroup": "ffdhe2048" 00:21:57.164 } 00:21:57.164 } 00:21:57.164 ]' 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.164 
01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.164 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.424 01:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:57.424 01:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.362 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.929 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.930 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.930 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.188 00:21:59.188 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.188 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.188 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.445 { 00:21:59.445 "cntlid": 113, 00:21:59.445 "qid": 0, 00:21:59.445 "state": "enabled", 00:21:59.445 "thread": "nvmf_tgt_poll_group_000", 00:21:59.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.445 "listen_address": { 00:21:59.445 "trtype": "TCP", 00:21:59.445 "adrfam": "IPv4", 00:21:59.445 "traddr": "10.0.0.2", 00:21:59.445 "trsvcid": "4420" 00:21:59.445 }, 00:21:59.445 "peer_address": { 00:21:59.445 "trtype": "TCP", 00:21:59.445 "adrfam": "IPv4", 00:21:59.445 "traddr": "10.0.0.1", 00:21:59.445 "trsvcid": "58010" 00:21:59.445 }, 00:21:59.445 "auth": { 00:21:59.445 "state": "completed", 00:21:59.445 "digest": "sha512", 00:21:59.445 "dhgroup": "ffdhe3072" 00:21:59.445 } 00:21:59.445 } 00:21:59.445 ]' 00:21:59.445 01:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.445 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.703 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.703 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.703 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.962 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:21:59.962 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.900 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.158 01:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.726 00:22:01.726 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.726 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.726 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.984 { 00:22:01.984 "cntlid": 115, 00:22:01.984 "qid": 0, 00:22:01.984 "state": "enabled", 00:22:01.984 "thread": "nvmf_tgt_poll_group_000", 00:22:01.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.984 "listen_address": { 00:22:01.984 "trtype": "TCP", 00:22:01.984 "adrfam": "IPv4", 00:22:01.984 "traddr": "10.0.0.2", 00:22:01.984 "trsvcid": "4420" 00:22:01.984 }, 00:22:01.984 "peer_address": { 00:22:01.984 "trtype": "TCP", 00:22:01.984 "adrfam": "IPv4", 
00:22:01.984 "traddr": "10.0.0.1", 00:22:01.984 "trsvcid": "56890" 00:22:01.984 }, 00:22:01.984 "auth": { 00:22:01.984 "state": "completed", 00:22:01.984 "digest": "sha512", 00:22:01.984 "dhgroup": "ffdhe3072" 00:22:01.984 } 00:22:01.984 } 00:22:01.984 ]' 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.984 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.985 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.985 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.985 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.985 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.985 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.245 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:02.245 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:03.237 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.237 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.494 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.058 00:22:04.058 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.058 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.058 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.316 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.316 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.316 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.316 01:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.316 { 00:22:04.316 "cntlid": 117, 00:22:04.316 "qid": 0, 00:22:04.316 "state": "enabled", 00:22:04.316 "thread": "nvmf_tgt_poll_group_000", 00:22:04.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.316 "listen_address": { 00:22:04.316 "trtype": "TCP", 
00:22:04.316 "adrfam": "IPv4", 00:22:04.316 "traddr": "10.0.0.2", 00:22:04.316 "trsvcid": "4420" 00:22:04.316 }, 00:22:04.316 "peer_address": { 00:22:04.316 "trtype": "TCP", 00:22:04.316 "adrfam": "IPv4", 00:22:04.316 "traddr": "10.0.0.1", 00:22:04.316 "trsvcid": "56912" 00:22:04.316 }, 00:22:04.316 "auth": { 00:22:04.316 "state": "completed", 00:22:04.316 "digest": "sha512", 00:22:04.316 "dhgroup": "ffdhe3072" 00:22:04.316 } 00:22:04.316 } 00:22:04.316 ]' 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.316 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.574 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:04.574 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:05.507 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:05.768 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.027 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.285 00:22:06.285 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.285 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.285 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.544 { 00:22:06.544 "cntlid": 119, 00:22:06.544 "qid": 0, 00:22:06.544 "state": "enabled", 00:22:06.544 "thread": "nvmf_tgt_poll_group_000", 00:22:06.544 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.544 "listen_address": { 00:22:06.544 "trtype": "TCP", 00:22:06.544 "adrfam": "IPv4", 00:22:06.544 "traddr": "10.0.0.2", 00:22:06.544 "trsvcid": "4420" 00:22:06.544 }, 00:22:06.544 "peer_address": { 00:22:06.544 "trtype": "TCP", 00:22:06.544 "adrfam": "IPv4", 00:22:06.544 "traddr": "10.0.0.1", 00:22:06.544 "trsvcid": "56946" 00:22:06.544 }, 00:22:06.544 "auth": { 00:22:06.544 "state": "completed", 00:22:06.544 "digest": "sha512", 00:22:06.544 "dhgroup": "ffdhe3072" 00:22:06.544 } 00:22:06.544 } 00:22:06.544 ]' 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.544 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.802 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.802 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.802 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.802 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.802 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.060 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:07.060 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.998 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.998 01:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.256 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.826 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.826 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.085 01:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.085 { 00:22:09.085 "cntlid": 121, 00:22:09.085 "qid": 0, 00:22:09.085 "state": "enabled", 00:22:09.085 "thread": "nvmf_tgt_poll_group_000", 00:22:09.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.085 "listen_address": { 00:22:09.085 "trtype": "TCP", 00:22:09.085 "adrfam": "IPv4", 00:22:09.085 "traddr": "10.0.0.2", 00:22:09.085 "trsvcid": "4420" 00:22:09.085 }, 00:22:09.085 "peer_address": { 00:22:09.085 "trtype": "TCP", 00:22:09.085 "adrfam": "IPv4", 00:22:09.085 "traddr": "10.0.0.1", 00:22:09.085 "trsvcid": "56982" 00:22:09.085 }, 00:22:09.085 "auth": { 00:22:09.085 "state": "completed", 00:22:09.085 "digest": "sha512", 00:22:09.085 "dhgroup": "ffdhe4096" 00:22:09.085 } 00:22:09.085 } 00:22:09.085 ]' 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.085 01:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.350 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:09.350 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:10.285 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.285 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.285 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.285 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.286 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
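Each iteration in this trace follows the same shape; condensed into plain commands, the sha512/ffdhe4096 pass with key1 that begins just below would look roughly like the sketch here. This is assembled only from the RPCs and nvme-cli calls visible in the trace: the rpc.py path, host socket, addresses and NQNs are the ones logged above, key1/ckey1 are key names the script registered earlier (not shown in this excerpt), the DHHC-1 secrets are placeholders, and the target-side RPC is assumed to use its default socket since the script's rpc_cmd socket is not visible here.

  # One sha512/ffdhe4096 DH-HMAC-CHAP round trip, per the trace (sketch only).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict the initiator to sha512 + ffdhe4096.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host with DH-HMAC-CHAP keys key1/ckey1
  # (key material registered earlier in the script, not shown in this excerpt).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller over TCP using the same keys.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Target side: confirm the queue pair authenticated as expected.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe4096

  # Tear down the bdev path, then repeat the check with nvme-cli,
  # passing the DHHC-1 secrets directly (placeholder values here).
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:01:<host-secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>:'
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The remainder of the trace repeats exactly this round trip for each key index (key0 through key3, with key3 omitting the controller key) and then for the larger DH groups ffdhe6144 and ffdhe8192.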
00:22:10.286 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.286 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.286 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.544 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.111 00:22:11.111 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.111 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.111 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.369 { 00:22:11.369 "cntlid": 123, 00:22:11.369 "qid": 0, 00:22:11.369 "state": "enabled", 00:22:11.369 "thread": "nvmf_tgt_poll_group_000", 00:22:11.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.369 "listen_address": { 00:22:11.369 "trtype": "TCP", 00:22:11.369 "adrfam": "IPv4", 00:22:11.369 "traddr": "10.0.0.2", 00:22:11.369 "trsvcid": "4420" 00:22:11.369 }, 00:22:11.369 "peer_address": { 00:22:11.369 "trtype": "TCP", 00:22:11.369 "adrfam": "IPv4", 00:22:11.369 "traddr": "10.0.0.1", 00:22:11.369 "trsvcid": "43792" 00:22:11.369 }, 00:22:11.369 "auth": { 00:22:11.369 "state": "completed", 00:22:11.369 "digest": "sha512", 00:22:11.369 "dhgroup": "ffdhe4096" 00:22:11.369 } 00:22:11.369 } 00:22:11.369 ]' 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:11.369 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.627 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.627 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.627 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.886 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:11.887 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.824 01:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:12.824 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.082 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.651 00:22:13.651 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.651 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.651 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.909 01:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.909 { 00:22:13.909 "cntlid": 125, 00:22:13.909 "qid": 0, 00:22:13.909 "state": "enabled", 00:22:13.909 "thread": "nvmf_tgt_poll_group_000", 00:22:13.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.909 "listen_address": { 00:22:13.909 "trtype": "TCP", 00:22:13.909 "adrfam": "IPv4", 00:22:13.909 "traddr": "10.0.0.2", 00:22:13.909 "trsvcid": "4420" 00:22:13.909 }, 00:22:13.909 "peer_address": { 00:22:13.909 "trtype": "TCP", 00:22:13.909 "adrfam": "IPv4", 00:22:13.909 "traddr": "10.0.0.1", 00:22:13.909 "trsvcid": "43826" 00:22:13.909 }, 00:22:13.909 "auth": { 00:22:13.909 "state": "completed", 00:22:13.909 "digest": "sha512", 00:22:13.909 "dhgroup": "ffdhe4096" 00:22:13.909 } 00:22:13.909 } 00:22:13.909 ]' 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.909 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.910 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.910 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.169 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:14.169 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:15.104 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.363 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.622 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.879 00:22:15.880 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.880 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.880 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.138 01:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.138 { 00:22:16.138 "cntlid": 127, 00:22:16.138 "qid": 0, 00:22:16.138 "state": "enabled", 00:22:16.138 "thread": "nvmf_tgt_poll_group_000", 00:22:16.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.138 "listen_address": { 00:22:16.138 "trtype": "TCP", 00:22:16.138 "adrfam": "IPv4", 00:22:16.138 "traddr": "10.0.0.2", 00:22:16.138 "trsvcid": "4420" 00:22:16.138 }, 00:22:16.138 "peer_address": { 00:22:16.138 "trtype": "TCP", 00:22:16.138 "adrfam": "IPv4", 00:22:16.138 "traddr": "10.0.0.1", 00:22:16.138 "trsvcid": "43874" 00:22:16.138 }, 00:22:16.138 "auth": { 00:22:16.138 "state": "completed", 00:22:16.138 "digest": "sha512", 00:22:16.138 "dhgroup": "ffdhe4096" 00:22:16.138 } 00:22:16.138 } 00:22:16.138 ]' 00:22:16.138 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.397 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.397 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.397 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.397 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.397 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.397 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.397 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.656 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:16.656 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.596 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.854 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.855 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.855 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.855 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.855 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.855 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.421 00:22:18.421 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.421 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.421 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.680 { 00:22:18.680 "cntlid": 129, 00:22:18.680 "qid": 0, 00:22:18.680 "state": "enabled", 00:22:18.680 "thread": "nvmf_tgt_poll_group_000", 00:22:18.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.680 "listen_address": { 00:22:18.680 "trtype": "TCP", 00:22:18.680 "adrfam": "IPv4", 00:22:18.680 "traddr": "10.0.0.2", 00:22:18.680 "trsvcid": "4420" 00:22:18.680 }, 00:22:18.680 "peer_address": { 00:22:18.680 "trtype": "TCP", 00:22:18.680 "adrfam": "IPv4", 00:22:18.680 "traddr": "10.0.0.1", 00:22:18.680 "trsvcid": "43908" 00:22:18.680 }, 00:22:18.680 "auth": { 00:22:18.680 "state": "completed", 00:22:18.680 "digest": "sha512", 00:22:18.680 "dhgroup": "ffdhe6144" 00:22:18.680 } 00:22:18.680 } 00:22:18.680 ]' 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.680 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.939 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.939 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.939 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.199 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:19.199 01:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret 
DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.135 01:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.393 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.962 00:22:20.962 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.962 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.962 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.220 { 00:22:21.220 "cntlid": 131, 00:22:21.220 "qid": 0, 00:22:21.220 "state": "enabled", 00:22:21.220 "thread": "nvmf_tgt_poll_group_000", 00:22:21.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.220 "listen_address": { 00:22:21.220 "trtype": "TCP", 00:22:21.220 "adrfam": "IPv4", 00:22:21.220 "traddr": "10.0.0.2", 00:22:21.220 "trsvcid": "4420" 00:22:21.220 }, 00:22:21.220 "peer_address": { 00:22:21.220 "trtype": "TCP", 00:22:21.220 "adrfam": "IPv4", 00:22:21.220 "traddr": "10.0.0.1", 00:22:21.220 "trsvcid": "43936" 00:22:21.220 }, 00:22:21.220 "auth": { 00:22:21.220 "state": "completed", 00:22:21.220 "digest": "sha512", 00:22:21.220 "dhgroup": "ffdhe6144" 00:22:21.220 } 00:22:21.220 } 00:22:21.220 ]' 00:22:21.220 01:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.220 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.220 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.220 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.220 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.479 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.479 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.479 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.735 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:21.735 01:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.670 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.927 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.493 00:22:23.493 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.493 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.493 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.750 { 00:22:23.750 "cntlid": 133, 00:22:23.750 "qid": 0, 00:22:23.750 "state": "enabled", 00:22:23.750 "thread": "nvmf_tgt_poll_group_000", 00:22:23.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.750 "listen_address": { 00:22:23.750 "trtype": "TCP", 00:22:23.750 "adrfam": "IPv4", 00:22:23.750 "traddr": "10.0.0.2", 00:22:23.750 "trsvcid": "4420" 00:22:23.750 }, 00:22:23.750 "peer_address": { 00:22:23.750 "trtype": "TCP", 00:22:23.750 "adrfam": "IPv4", 00:22:23.750 "traddr": "10.0.0.1", 00:22:23.750 "trsvcid": "50126" 00:22:23.750 }, 00:22:23.750 "auth": { 00:22:23.750 "state": "completed", 00:22:23.750 "digest": "sha512", 00:22:23.750 "dhgroup": "ffdhe6144" 00:22:23.750 } 00:22:23.750 } 00:22:23.750 ]' 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.750 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.007 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.007 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.007 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.266 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret 
DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:24.266 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.199 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.456 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.457 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.457 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.457 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:25.457 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.022 00:22:26.022 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.022 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.022 01:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.280 { 00:22:26.280 "cntlid": 135, 00:22:26.280 "qid": 0, 00:22:26.280 "state": "enabled", 00:22:26.280 "thread": "nvmf_tgt_poll_group_000", 00:22:26.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.280 "listen_address": { 00:22:26.280 "trtype": "TCP", 00:22:26.280 "adrfam": "IPv4", 00:22:26.280 "traddr": "10.0.0.2", 00:22:26.280 "trsvcid": "4420" 00:22:26.280 }, 00:22:26.280 "peer_address": { 00:22:26.280 "trtype": "TCP", 00:22:26.280 "adrfam": "IPv4", 00:22:26.280 "traddr": "10.0.0.1", 00:22:26.280 "trsvcid": "50152" 00:22:26.280 }, 00:22:26.280 "auth": { 00:22:26.280 "state": "completed", 00:22:26.280 "digest": "sha512", 00:22:26.280 "dhgroup": "ffdhe6144" 00:22:26.280 } 00:22:26.280 } 00:22:26.280 ]' 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.280 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.847 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:26.847 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.785 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.043 01:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.979 00:22:28.979 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.979 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.979 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.237 { 00:22:29.237 "cntlid": 137, 00:22:29.237 "qid": 0, 00:22:29.237 "state": "enabled", 00:22:29.237 "thread": "nvmf_tgt_poll_group_000", 00:22:29.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.237 "listen_address": { 00:22:29.237 "trtype": "TCP", 00:22:29.237 "adrfam": "IPv4", 00:22:29.237 "traddr": "10.0.0.2", 00:22:29.237 "trsvcid": "4420" 00:22:29.237 }, 00:22:29.237 "peer_address": { 00:22:29.237 "trtype": "TCP", 00:22:29.237 "adrfam": "IPv4", 00:22:29.237 "traddr": "10.0.0.1", 00:22:29.237 "trsvcid": "50200" 00:22:29.237 }, 00:22:29.237 "auth": { 00:22:29.237 "state": "completed", 00:22:29.237 "digest": "sha512", 00:22:29.237 "dhgroup": "ffdhe8192" 00:22:29.237 } 00:22:29.237 } 00:22:29.237 ]' 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.237 01:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.237 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.237 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.237 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.237 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.237 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.496 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:29.496 01:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.871 01:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.871 01:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.810 00:22:31.810 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.810 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.810 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.069 { 00:22:32.069 "cntlid": 139, 00:22:32.069 "qid": 0, 00:22:32.069 "state": "enabled", 00:22:32.069 "thread": "nvmf_tgt_poll_group_000", 00:22:32.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.069 "listen_address": { 00:22:32.069 "trtype": "TCP", 00:22:32.069 "adrfam": "IPv4", 00:22:32.069 "traddr": "10.0.0.2", 00:22:32.069 "trsvcid": "4420" 00:22:32.069 }, 00:22:32.069 "peer_address": { 00:22:32.069 "trtype": "TCP", 00:22:32.069 "adrfam": "IPv4", 00:22:32.069 "traddr": "10.0.0.1", 00:22:32.069 "trsvcid": "56566" 00:22:32.069 }, 00:22:32.069 "auth": { 00:22:32.069 "state": "completed", 00:22:32.069 "digest": "sha512", 00:22:32.069 "dhgroup": "ffdhe8192" 00:22:32.069 } 00:22:32.069 } 00:22:32.069 ]' 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.069 01:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.069 01:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.330 01:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:32.330 01:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: --dhchap-ctrl-secret DHHC-1:02:YjUzMjVmZmQ5MDViODE0YzA1YjU4ZDE4ZWNjOWU4NDc5NTE1NTY0NzgyOGE4YTQ3KKxLpg==: 00:22:33.316 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.575 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.834 01:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.834 01:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.774 00:22:34.774 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.774 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.774 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.032 { 00:22:35.032 "cntlid": 141, 00:22:35.032 "qid": 0, 00:22:35.032 "state": "enabled", 00:22:35.032 "thread": "nvmf_tgt_poll_group_000", 00:22:35.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.032 "listen_address": { 00:22:35.032 "trtype": "TCP", 00:22:35.032 "adrfam": "IPv4", 00:22:35.032 "traddr": "10.0.0.2", 00:22:35.032 "trsvcid": "4420" 00:22:35.032 }, 00:22:35.032 "peer_address": { 00:22:35.032 "trtype": "TCP", 00:22:35.032 "adrfam": "IPv4", 00:22:35.032 "traddr": "10.0.0.1", 00:22:35.032 "trsvcid": "56594" 00:22:35.032 }, 00:22:35.032 "auth": { 00:22:35.032 "state": "completed", 00:22:35.032 "digest": "sha512", 00:22:35.032 "dhgroup": "ffdhe8192" 00:22:35.032 } 00:22:35.032 } 00:22:35.032 ]' 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.032 01:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.032 01:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.290 01:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:35.290 01:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:01:M2EzMjkwZjIwOTNhZmUwMzU3NzZjYThiZDA1ZjczMzRmjjc1: 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.227 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.485 01:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.485 01:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.420 00:22:37.420 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.420 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.420 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.678 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.678 { 00:22:37.678 "cntlid": 143, 00:22:37.678 "qid": 0, 00:22:37.678 "state": "enabled", 00:22:37.678 "thread": "nvmf_tgt_poll_group_000", 00:22:37.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.678 "listen_address": { 00:22:37.678 "trtype": "TCP", 00:22:37.678 "adrfam": "IPv4", 00:22:37.679 "traddr": "10.0.0.2", 00:22:37.679 "trsvcid": "4420" 00:22:37.679 }, 00:22:37.679 "peer_address": { 00:22:37.679 "trtype": "TCP", 00:22:37.679 "adrfam": "IPv4", 00:22:37.679 "traddr": "10.0.0.1", 00:22:37.679 "trsvcid": "56638" 00:22:37.679 }, 00:22:37.679 "auth": { 00:22:37.679 "state": "completed", 00:22:37.679 "digest": "sha512", 00:22:37.679 "dhgroup": "ffdhe8192" 00:22:37.679 } 00:22:37.679 } 00:22:37.679 ]' 00:22:37.679 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.679 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.937 
01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.937 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.937 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.937 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.937 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.937 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.195 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:38.195 01:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.132 01:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.389 01:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.389 01:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.330 00:22:40.330 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.330 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.330 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.588 { 00:22:40.588 "cntlid": 145, 00:22:40.588 "qid": 0, 00:22:40.588 "state": "enabled", 00:22:40.588 "thread": "nvmf_tgt_poll_group_000", 00:22:40.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.588 "listen_address": { 00:22:40.588 "trtype": "TCP", 00:22:40.588 "adrfam": "IPv4", 00:22:40.588 "traddr": "10.0.0.2", 00:22:40.588 "trsvcid": "4420" 00:22:40.588 }, 00:22:40.588 "peer_address": { 00:22:40.588 
"trtype": "TCP", 00:22:40.588 "adrfam": "IPv4", 00:22:40.588 "traddr": "10.0.0.1", 00:22:40.588 "trsvcid": "56662" 00:22:40.588 }, 00:22:40.588 "auth": { 00:22:40.588 "state": "completed", 00:22:40.588 "digest": "sha512", 00:22:40.588 "dhgroup": "ffdhe8192" 00:22:40.588 } 00:22:40.588 } 00:22:40.588 ]' 00:22:40.588 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.589 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.589 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.589 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.589 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.848 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.848 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.848 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.106 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:41.106 01:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2JhNWNiZjkzYmI3NzM1MTA4MWRkMTAwYmJkMzA2MThjOTI5NGY4OGExM2E2MThhlEZdVA==: --dhchap-ctrl-secret DHHC-1:03:ODUzN2VlN2RkYmJkMWNiNGQ1Y2ExMmQwNjJkMjM2N2NmZjJmMDJkNDRhOGU4MmJkYTVjMDZlM2I3Y2EyNGNhY9VnmWk=: 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.042 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:42.043 01:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:42.979 request: 00:22:42.979 { 00:22:42.979 "name": "nvme0", 00:22:42.979 "trtype": "tcp", 00:22:42.979 "traddr": "10.0.0.2", 00:22:42.979 "adrfam": "ipv4", 00:22:42.979 "trsvcid": "4420", 00:22:42.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:42.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:42.979 "prchk_reftag": false, 00:22:42.979 "prchk_guard": false, 00:22:42.979 "hdgst": false, 00:22:42.979 "ddgst": false, 00:22:42.979 "dhchap_key": "key2", 00:22:42.979 "allow_unrecognized_csi": false, 00:22:42.979 "method": "bdev_nvme_attach_controller", 00:22:42.979 "req_id": 1 00:22:42.979 } 00:22:42.979 Got JSON-RPC error response 00:22:42.979 response: 00:22:42.979 { 00:22:42.979 "code": -5, 00:22:42.979 "message": "Input/output error" 00:22:42.979 } 00:22:42.979 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.980 01:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:42.980 01:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:43.919 request: 00:22:43.919 { 00:22:43.919 "name": "nvme0", 00:22:43.919 "trtype": "tcp", 00:22:43.919 "traddr": "10.0.0.2", 00:22:43.919 "adrfam": "ipv4", 00:22:43.919 "trsvcid": "4420", 00:22:43.919 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:43.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.919 "prchk_reftag": false, 00:22:43.919 "prchk_guard": false, 00:22:43.919 "hdgst": false, 00:22:43.919 "ddgst": false, 00:22:43.919 "dhchap_key": "key1", 00:22:43.919 "dhchap_ctrlr_key": "ckey2", 00:22:43.919 "allow_unrecognized_csi": false, 00:22:43.919 "method": "bdev_nvme_attach_controller", 00:22:43.919 "req_id": 1 00:22:43.919 } 00:22:43.919 Got JSON-RPC error response 00:22:43.919 response: 00:22:43.919 { 00:22:43.919 "code": -5, 00:22:43.919 "message": "Input/output error" 00:22:43.919 } 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:43.919 01:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.919 01:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.857 request: 00:22:44.857 { 00:22:44.857 "name": "nvme0", 00:22:44.857 "trtype": "tcp", 00:22:44.857 "traddr": "10.0.0.2", 00:22:44.857 "adrfam": "ipv4", 00:22:44.857 "trsvcid": "4420", 00:22:44.857 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.857 "prchk_reftag": false, 00:22:44.857 "prchk_guard": false, 00:22:44.857 "hdgst": false, 00:22:44.857 "ddgst": false, 00:22:44.857 "dhchap_key": "key1", 00:22:44.857 "dhchap_ctrlr_key": "ckey1", 00:22:44.857 "allow_unrecognized_csi": false, 00:22:44.857 "method": "bdev_nvme_attach_controller", 00:22:44.857 "req_id": 1 00:22:44.857 } 00:22:44.857 Got JSON-RPC error response 00:22:44.857 response: 00:22:44.857 { 00:22:44.857 "code": -5, 00:22:44.857 "message": "Input/output error" 00:22:44.857 } 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 903505 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 903505 ']' 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 903505 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903505 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903505' 00:22:44.857 killing process with pid 903505 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 903505 00:22:44.857 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 903505 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=926665 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 926665 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 926665 ']' 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.115 01:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 926665 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 926665 ']' 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
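The restart above follows the usual SPDK bring-up pattern for these auth tests: nvmf_tgt is launched with --wait-for-rpc and -L nvmf_auth so DH-HMAC-CHAP debug logging is enabled and no configuration runs until the RPC socket is ready. A minimal sketch of that pattern, using only the binary, flags and socket paths visible in this trace (the polling loop is an illustrative stand-in for the suite's waitforlisten helper, and the ip netns exec prefix shown in the log is omitted):

    # start the target paused, with nvmf_auth debug logging, as in the trace above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # illustrative wait: poll for the default RPC socket instead of the suite's waitforlisten
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    # resume subsystem initialization over JSON-RPC (standard step after --wait-for-rpc)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init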
00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.372 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 null0 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2J1 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.o0d ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0d 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tcH 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cgh ]] 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cgh 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.632 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.891 01:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iFZ 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kjD ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kjD 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nqD 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.891 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
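Once the key files are registered with the keyring as above, the connect_authenticate step for key3 reduces to two RPCs: bind the key to the host NQN on the target subsystem, then attach a controller from the host-side instance at /var/tmp/host.sock using the same key name. A condensed sketch built only from RPCs, names and addresses that appear in this trace; rpc.py stands for the full scripts/rpc.py path used throughout the log, and it is assumed the host-side instance has key3 registered in its own keyring as done earlier in the suite:

    # target side (default /var/tmp/spdk.sock): register the secret and allow the host with it
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.nqD
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
    # host side: attach with the matching key name; the qpair's auth state should then report "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3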
00:22:45.892 01:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.267 nvme0n1 00:22:47.267 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.267 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.267 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.525 { 00:22:47.525 "cntlid": 1, 00:22:47.525 "qid": 0, 00:22:47.525 "state": "enabled", 00:22:47.525 "thread": "nvmf_tgt_poll_group_000", 00:22:47.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.525 "listen_address": { 00:22:47.525 "trtype": "TCP", 00:22:47.525 "adrfam": "IPv4", 00:22:47.525 "traddr": "10.0.0.2", 00:22:47.525 "trsvcid": "4420" 00:22:47.525 }, 00:22:47.525 "peer_address": { 00:22:47.525 "trtype": "TCP", 00:22:47.525 "adrfam": "IPv4", 00:22:47.525 "traddr": "10.0.0.1", 00:22:47.525 "trsvcid": "59098" 00:22:47.525 }, 00:22:47.525 "auth": { 00:22:47.525 "state": "completed", 00:22:47.525 "digest": "sha512", 00:22:47.525 "dhgroup": "ffdhe8192" 00:22:47.525 } 00:22:47.525 } 00:22:47.525 ]' 00:22:47.525 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.783 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.040 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:48.040 01:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:48.974 01:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.232 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.798 request: 00:22:49.798 { 00:22:49.798 "name": "nvme0", 00:22:49.798 "trtype": "tcp", 00:22:49.798 "traddr": "10.0.0.2", 00:22:49.798 "adrfam": "ipv4", 00:22:49.798 "trsvcid": "4420", 00:22:49.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:49.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:49.798 "prchk_reftag": false, 00:22:49.798 "prchk_guard": false, 00:22:49.798 "hdgst": false, 00:22:49.798 "ddgst": false, 00:22:49.798 "dhchap_key": "key3", 00:22:49.798 "allow_unrecognized_csi": false, 00:22:49.798 "method": "bdev_nvme_attach_controller", 00:22:49.798 "req_id": 1 00:22:49.798 } 00:22:49.798 Got JSON-RPC error response 00:22:49.798 response: 00:22:49.798 { 00:22:49.798 "code": -5, 00:22:49.798 "message": "Input/output error" 00:22:49.798 } 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.798 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.057 request: 00:22:50.057 { 00:22:50.057 "name": "nvme0", 00:22:50.057 "trtype": "tcp", 00:22:50.057 "traddr": "10.0.0.2", 00:22:50.057 "adrfam": "ipv4", 00:22:50.057 "trsvcid": "4420", 00:22:50.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:50.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:50.057 "prchk_reftag": false, 00:22:50.057 "prchk_guard": false, 00:22:50.057 "hdgst": false, 00:22:50.057 "ddgst": false, 00:22:50.057 "dhchap_key": "key3", 00:22:50.057 "allow_unrecognized_csi": false, 00:22:50.057 "method": "bdev_nvme_attach_controller", 00:22:50.057 "req_id": 1 00:22:50.057 } 00:22:50.057 Got JSON-RPC error response 00:22:50.057 response: 00:22:50.057 { 00:22:50.057 "code": -5, 00:22:50.057 "message": "Input/output error" 00:22:50.057 } 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.057 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.316 01:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:50.575 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.142 request: 00:22:51.142 { 00:22:51.142 "name": "nvme0", 00:22:51.142 "trtype": "tcp", 00:22:51.142 "traddr": "10.0.0.2", 00:22:51.142 "adrfam": "ipv4", 00:22:51.142 "trsvcid": "4420", 00:22:51.142 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:51.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.143 "prchk_reftag": false, 00:22:51.143 "prchk_guard": false, 00:22:51.143 "hdgst": false, 00:22:51.143 "ddgst": false, 00:22:51.143 "dhchap_key": "key0", 00:22:51.143 "dhchap_ctrlr_key": "key1", 00:22:51.143 "allow_unrecognized_csi": false, 00:22:51.143 "method": "bdev_nvme_attach_controller", 00:22:51.143 "req_id": 1 00:22:51.143 } 00:22:51.143 Got JSON-RPC error response 00:22:51.143 response: 00:22:51.143 { 00:22:51.143 "code": -5, 00:22:51.143 "message": "Input/output error" 00:22:51.143 } 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.143 01:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:51.143 01:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:51.401 nvme0n1 00:22:51.401 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:51.401 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:51.401 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.659 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.659 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.659 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.917 01:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:53.822 nvme0n1 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.822 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:54.081 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.081 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:54.081 01:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: --dhchap-ctrl-secret DHHC-1:03:YTFmNjA0MDRhZWE2YmQwYjIyNjUxYjQzNjQzZGNiNzM5YmI1ZWQ2NGY4OTI4OTRjZGFiYzU3ZGEyODFkMDNhOXJZ+dA=: 00:22:55.018 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:55.018 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:55.018 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:55.018 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:55.019 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:55.019 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:55.019 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:55.019 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.019 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:55.277 01:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:56.214 request: 00:22:56.214 { 00:22:56.214 "name": "nvme0", 00:22:56.214 "trtype": "tcp", 00:22:56.214 "traddr": "10.0.0.2", 00:22:56.214 "adrfam": "ipv4", 00:22:56.214 "trsvcid": "4420", 00:22:56.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:56.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:56.214 "prchk_reftag": false, 00:22:56.214 "prchk_guard": false, 00:22:56.214 "hdgst": false, 00:22:56.214 "ddgst": false, 00:22:56.214 "dhchap_key": "key1", 00:22:56.214 "allow_unrecognized_csi": false, 00:22:56.214 "method": "bdev_nvme_attach_controller", 00:22:56.214 "req_id": 1 00:22:56.214 } 00:22:56.214 Got JSON-RPC error response 00:22:56.214 response: 00:22:56.214 { 00:22:56.214 "code": -5, 00:22:56.214 "message": "Input/output error" 00:22:56.214 } 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.214 01:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.652 nvme0n1 00:22:57.652 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:57.652 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:57.652 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.909 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.909 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.909 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:58.166 01:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:58.424 nvme0n1 00:22:58.424 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:58.424 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:58.424 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.681 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.681 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.681 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: '' 2s 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: ]] 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzkxNTY4YjMzODE1MjZmOTA0ODI3OTEwOGM0YmYzNzL4SOyT: 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:58.938 01:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: 2s 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: ]] 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2QxYzU5YTVhYmJlN2VkMGI0NWUzMGE0YWZhN2ZhZTQ4M2YwMjhlODI3NGM5YmNmcbIrPA==: 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:01.472 01:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.378 01:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.759 nvme0n1 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:04.759 01:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:05.699 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:05.958 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:05.958 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.958 01:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:06.217 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:06.477 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.477 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:06.477 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.477 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:06.477 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:07.412 request: 00:23:07.412 { 00:23:07.412 "name": "nvme0", 00:23:07.412 "dhchap_key": "key1", 00:23:07.412 "dhchap_ctrlr_key": "key3", 00:23:07.412 "method": "bdev_nvme_set_keys", 00:23:07.412 "req_id": 1 00:23:07.412 } 00:23:07.412 Got JSON-RPC error response 00:23:07.412 response: 00:23:07.412 { 00:23:07.412 "code": -13, 00:23:07.412 "message": "Permission denied" 00:23:07.412 } 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:07.412 01:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.412 01:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:07.412 01:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:08.790 01:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:10.703 nvme0n1 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
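(Condensed sketch of the re-key flow exercised above; <hostnqn> again abbreviates the host NQN/UUID used in this run, and the key names refer to the keyring entries registered earlier in the log.)
  # target first switches the subsystem to the new key pair
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key key3
  # host re-authenticates the live controller with the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # a mismatched pair is rejected with -13 (Permission denied); polling the controller
  # list until it reports zero entries confirms the connection was dropped
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length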
00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:10.703 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:11.272 request: 00:23:11.272 { 00:23:11.272 "name": "nvme0", 00:23:11.272 "dhchap_key": "key2", 00:23:11.272 "dhchap_ctrlr_key": "key0", 00:23:11.272 "method": "bdev_nvme_set_keys", 00:23:11.272 "req_id": 1 00:23:11.272 } 00:23:11.272 Got JSON-RPC error response 00:23:11.272 response: 00:23:11.272 { 00:23:11.272 "code": -13, 00:23:11.272 "message": "Permission denied" 00:23:11.272 } 00:23:11.272 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:11.272 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.272 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.272 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.273 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:11.273 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:11.273 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.532 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:11.532 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:12.471 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:12.471 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:12.471 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 903529 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 903529 ']' 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 903529 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:12.729 01:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.729 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 903529 00:23:12.988 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:12.988 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:12.988 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 903529' 00:23:12.988 killing process with pid 903529 00:23:12.988 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 903529 00:23:12.988 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 903529 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.248 rmmod nvme_tcp 00:23:13.248 rmmod nvme_fabrics 00:23:13.248 rmmod nvme_keyring 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 926665 ']' 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 926665 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 926665 ']' 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 926665 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.248 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926665 00:23:13.506 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.506 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.506 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926665' 00:23:13.506 killing process with pid 926665 00:23:13.506 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 926665 00:23:13.506 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 926665 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.766 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2J1 /tmp/spdk.key-sha256.tcH /tmp/spdk.key-sha384.iFZ /tmp/spdk.key-sha512.nqD /tmp/spdk.key-sha512.o0d /tmp/spdk.key-sha384.cgh /tmp/spdk.key-sha256.kjD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:15.669 00:23:15.669 real 3m37.766s 00:23:15.669 user 8m28.821s 00:23:15.669 sys 0m27.311s 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.669 ************************************ 00:23:15.669 END TEST nvmf_auth_target 00:23:15.669 ************************************ 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:15.669 ************************************ 00:23:15.669 START TEST nvmf_bdevio_no_huge 00:23:15.669 ************************************ 00:23:15.669 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:15.929 * Looking for test storage... 
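(The auth-target test tears down at this point and the suite moves on to nvmf_bdevio_no_huge. A minimal sketch of invoking that next test outside the Jenkins harness, assuming a built SPDK checkout, root privileges and the usual nvmf test prerequisites such as the kernel nvme-tcp modules; the path is a placeholder for your own workspace.)
  cd /path/to/spdk
  test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages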
00:23:15.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.929 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.930 --rc genhtml_branch_coverage=1 00:23:15.930 --rc genhtml_function_coverage=1 00:23:15.930 --rc genhtml_legend=1 00:23:15.930 --rc geninfo_all_blocks=1 00:23:15.930 --rc geninfo_unexecuted_blocks=1 00:23:15.930 00:23:15.930 ' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.930 --rc genhtml_branch_coverage=1 00:23:15.930 --rc genhtml_function_coverage=1 00:23:15.930 --rc genhtml_legend=1 00:23:15.930 --rc geninfo_all_blocks=1 00:23:15.930 --rc geninfo_unexecuted_blocks=1 00:23:15.930 00:23:15.930 ' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.930 --rc genhtml_branch_coverage=1 00:23:15.930 --rc genhtml_function_coverage=1 00:23:15.930 --rc genhtml_legend=1 00:23:15.930 --rc geninfo_all_blocks=1 00:23:15.930 --rc geninfo_unexecuted_blocks=1 00:23:15.930 00:23:15.930 ' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:15.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.930 --rc genhtml_branch_coverage=1 00:23:15.930 --rc genhtml_function_coverage=1 00:23:15.930 --rc genhtml_legend=1 00:23:15.930 --rc geninfo_all_blocks=1 00:23:15.930 --rc geninfo_unexecuted_blocks=1 00:23:15.930 00:23:15.930 ' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:15.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.930 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.463 
01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:18.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
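The gather_supported_nvmf_pci_devs trace above buckets NICs by PCI vendor/device ID (here Intel 0x8086:0x159b, an E810 handled by the ice driver) and then resolves each match to its kernel net device via /sys/bus/pci/devices/$pci/net/. A hedged sketch of that matching idea using plain sysfs reads; the WANT value is an assumption limited to the one ID this run reports, and the loop is not the harness's actual function.

  # Illustrative sysfs-based NIC discovery, not gather_supported_nvmf_pci_devs itself.
  WANT="0x8086:0x159b"                     # the only vendor:device pair reported in this log
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")          # e.g. 0x8086
      device=$(cat "$dev/device")          # e.g. 0x159b
      [ "$vendor:$device" = "$WANT" ] || continue
      pci=${dev##*/}                       # bus address, e.g. 0000:0a:00.0
      net=$(ls "$dev/net" 2>/dev/null)     # netdev name(s) behind this function, e.g. cvl_0_0
      echo "Found $pci ($vendor - $device) -> ${net:-no netdev}"
  done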
00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:18.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:18.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:18.463 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:18.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:18.464 
01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:23:18.464 00:23:18.464 --- 10.0.0.2 ping statistics --- 00:23:18.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.464 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:18.464 00:23:18.464 --- 10.0.0.1 ping statistics --- 00:23:18.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.464 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=932037 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 932037 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 932037 ']' 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.464 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.464 [2024-10-01 01:40:57.972316] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:18.464 [2024-10-01 01:40:57.972424] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:18.464 [2024-10-01 01:40:58.052470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.464 [2024-10-01 01:40:58.144947] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.464 [2024-10-01 01:40:58.145022] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.464 [2024-10-01 01:40:58.145051] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.464 [2024-10-01 01:40:58.145065] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.464 [2024-10-01 01:40:58.145077] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.464 [2024-10-01 01:40:58.145138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.464 [2024-10-01 01:40:58.145193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:23:18.464 [2024-10-01 01:40:58.145253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:23:18.464 [2024-10-01 01:40:58.145256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.464 [2024-10-01 01:40:58.302474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.464 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.725 Malloc0 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.725 [2024-10-01 01:40:58.340626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:18.725 { 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme$subsystem", 00:23:18.725 "trtype": "$TEST_TRANSPORT", 00:23:18.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "$NVMF_PORT", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.725 "hdgst": ${hdgst:-false}, 00:23:18.725 "ddgst": ${ddgst:-false} 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 } 00:23:18.725 EOF 00:23:18.725 )") 00:23:18.725 01:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:23:18.725 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:18.725 "params": { 00:23:18.725 "name": "Nvme1", 00:23:18.725 "trtype": "tcp", 00:23:18.725 "traddr": "10.0.0.2", 00:23:18.725 "adrfam": "ipv4", 00:23:18.725 "trsvcid": "4420", 00:23:18.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.725 "hdgst": false, 00:23:18.725 "ddgst": false 00:23:18.725 }, 00:23:18.725 "method": "bdev_nvme_attach_controller" 00:23:18.725 }' 00:23:18.725 [2024-10-01 01:40:58.388845] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:18.725 [2024-10-01 01:40:58.388930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid932176 ] 00:23:18.725 [2024-10-01 01:40:58.449231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:18.725 [2024-10-01 01:40:58.539573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.725 [2024-10-01 01:40:58.539624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.725 [2024-10-01 01:40:58.539627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.293 I/O targets: 00:23:19.293 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:19.293 00:23:19.293 00:23:19.293 CUnit - A unit testing framework for C - Version 2.1-3 00:23:19.293 http://cunit.sourceforge.net/ 00:23:19.293 00:23:19.293 00:23:19.293 Suite: bdevio tests on: Nvme1n1 00:23:19.293 Test: blockdev write read block ...passed 00:23:19.293 Test: blockdev write zeroes read block ...passed 00:23:19.293 Test: blockdev write zeroes read no split ...passed 00:23:19.293 Test: blockdev write zeroes read split ...passed 00:23:19.293 Test: blockdev write zeroes read split partial ...passed 00:23:19.293 Test: blockdev reset ...[2024-10-01 01:40:59.094511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.293 [2024-10-01 01:40:59.094655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6f700 (9): Bad file descriptor 00:23:19.293 [2024-10-01 01:40:59.115448] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:19.293 passed 00:23:19.551 Test: blockdev write read 8 blocks ...passed 00:23:19.551 Test: blockdev write read size > 128k ...passed 00:23:19.551 Test: blockdev write read invalid size ...passed 00:23:19.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:19.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:19.551 Test: blockdev write read max offset ...passed 00:23:19.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:19.551 Test: blockdev writev readv 8 blocks ...passed 00:23:19.551 Test: blockdev writev readv 30 x 1block ...passed 00:23:19.551 Test: blockdev writev readv block ...passed 00:23:19.551 Test: blockdev writev readv size > 128k ...passed 00:23:19.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:19.551 Test: blockdev comparev and writev ...[2024-10-01 01:40:59.328834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.328870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.328907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.328924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.329313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.329340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.329363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.329380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.329747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.329771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.329794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.329810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.330185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.330210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.551 [2024-10-01 01:40:59.330232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:19.551 [2024-10-01 01:40:59.330248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.551 passed 00:23:19.809 Test: blockdev nvme passthru rw ...passed 00:23:19.809 Test: blockdev nvme passthru vendor specific ...[2024-10-01 01:40:59.413299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:19.809 [2024-10-01 01:40:59.413327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.809 [2024-10-01 01:40:59.413493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:19.809 [2024-10-01 01:40:59.413516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.809 [2024-10-01 01:40:59.413675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:19.809 [2024-10-01 01:40:59.413697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.809 [2024-10-01 01:40:59.413856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:19.809 [2024-10-01 01:40:59.413886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.809 passed 00:23:19.809 Test: blockdev nvme admin passthru ...passed 00:23:19.809 Test: blockdev copy ...passed 00:23:19.809 00:23:19.809 Run Summary: Type Total Ran Passed Failed Inactive 00:23:19.809 suites 1 1 n/a 0 0 00:23:19.809 tests 23 23 23 0 0 00:23:19.809 asserts 152 152 152 0 n/a 00:23:19.809 00:23:19.809 Elapsed time = 1.157 seconds 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.069 rmmod nvme_tcp 00:23:20.069 rmmod nvme_fabrics 00:23:20.069 rmmod nvme_keyring 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 932037 ']' 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 932037 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 932037 ']' 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 932037 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 932037 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 932037' 00:23:20.069 killing process with pid 932037 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 932037 00:23:20.069 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 932037 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.638 01:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.546 00:23:22.546 real 0m6.892s 00:23:22.546 user 0m11.598s 00:23:22.546 sys 0m2.728s 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.546 ************************************ 00:23:22.546 END TEST nvmf_bdevio_no_huge 00:23:22.546 ************************************ 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.546 01:41:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.805 ************************************ 00:23:22.805 START TEST nvmf_tls 00:23:22.805 ************************************ 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:22.805 * Looking for test storage... 00:23:22.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:22.805 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.806 --rc genhtml_branch_coverage=1 00:23:22.806 --rc genhtml_function_coverage=1 00:23:22.806 --rc genhtml_legend=1 00:23:22.806 --rc geninfo_all_blocks=1 00:23:22.806 --rc geninfo_unexecuted_blocks=1 00:23:22.806 00:23:22.806 ' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.806 --rc genhtml_branch_coverage=1 00:23:22.806 --rc genhtml_function_coverage=1 00:23:22.806 --rc genhtml_legend=1 00:23:22.806 --rc geninfo_all_blocks=1 00:23:22.806 --rc geninfo_unexecuted_blocks=1 00:23:22.806 00:23:22.806 ' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.806 --rc genhtml_branch_coverage=1 00:23:22.806 --rc genhtml_function_coverage=1 00:23:22.806 --rc genhtml_legend=1 00:23:22.806 --rc geninfo_all_blocks=1 00:23:22.806 --rc geninfo_unexecuted_blocks=1 00:23:22.806 00:23:22.806 ' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.806 --rc genhtml_branch_coverage=1 00:23:22.806 --rc genhtml_function_coverage=1 00:23:22.806 --rc genhtml_legend=1 00:23:22.806 --rc geninfo_all_blocks=1 00:23:22.806 --rc geninfo_unexecuted_blocks=1 00:23:22.806 00:23:22.806 ' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
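tls.sh now sources nvmf/common.sh again, which (as the trace continues below, and as it did for bdevio.sh earlier) derives a host NQN from nvme gen-hostnqn and packs it with the matching host ID into the NVME_HOST argument array. A hedged sketch of that identity setup; the variable names mirror the trace, but the derivation shown is an illustration rather than the contents of nvmf/common.sh.

  # Illustrative sketch; values in comments are taken from this log.
  NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}           # keep only the UUID part, as the trace shows
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # a later initiator-side connect can then reuse the same identity (addresses from this log):
  #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1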
00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.806 01:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:25.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:25.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:25.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:25.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.342 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
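At this point the harness has mapped both PCI ports to kernel net devices (cvl_0_0, cvl_0_1) and chosen cvl_0_0 as the target interface inside a private namespace, with cvl_0_1 left in the root namespace as the initiator. The commands traced next follow roughly the pattern below; this is a condensed sketch with names and addresses taken from the log, error handling omitted.

```bash
#!/usr/bin/env bash
# Target port lives at 10.0.0.2 inside cvl_0_0_ns_spdk; initiator port stays
# in the root namespace at 10.0.0.1. Mirrors the ip/iptables calls in the trace.
pci=0000:0a:00.0
# Discover the net device behind the PCI address, as the trace does with
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
target_if=$(basename /sys/bus/pci/devices/"$pci"/net/*)
initiator_if=cvl_0_1
ns=cvl_0_0_ns_spdk

ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
# Allow NVMe/TCP traffic from the initiator side to the target port 4420.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
# Sanity checks, matching the pings in the trace.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1
```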
00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:23:25.343 00:23:25.343 --- 10.0.0.2 ping statistics --- 00:23:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.343 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:23:25.343 00:23:25.343 --- 10.0.0.1 ping statistics --- 00:23:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.343 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=934254 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 934254 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 934254 ']' 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.343 01:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 [2024-10-01 01:41:04.804630] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:23:25.343 [2024-10-01 01:41:04.804706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.343 [2024-10-01 01:41:04.873096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.343 [2024-10-01 01:41:04.960107] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.343 [2024-10-01 01:41:04.960164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.343 [2024-10-01 01:41:04.960188] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.343 [2024-10-01 01:41:04.960200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.343 [2024-10-01 01:41:04.960209] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.343 [2024-10-01 01:41:04.960234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:25.343 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:25.602 true 00:23:25.602 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:25.602 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:25.862 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:25.862 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:25.862 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:26.121 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.121 01:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:26.381 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:26.381 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:26.381 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:26.641 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.641 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:26.899 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:26.899 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:26.899 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.899 01:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:27.469 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:27.469 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:27.469 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:27.469 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:27.469 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:27.820 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:27.820 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:27.820 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:28.081 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:28.081 01:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:23:28.340 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.7H3g3cf1k9 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.HLfiPsVE2r 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7H3g3cf1k9 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.HLfiPsVE2r 00:23:28.598 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:28.856 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:29.114 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.7H3g3cf1k9 00:23:29.114 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7H3g3cf1k9 00:23:29.114 01:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.371 [2024-10-01 01:41:09.075021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.371 01:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:29.628 01:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:29.886 [2024-10-01 01:41:09.604425] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.886 [2024-10-01 01:41:09.604716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.886 01:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.144 malloc0 00:23:30.144 01:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.402 01:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7H3g3cf1k9 00:23:30.660 01:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.919 01:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7H3g3cf1k9 00:23:43.137 Initializing NVMe Controllers 00:23:43.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.137 Initialization complete. Launching workers. 00:23:43.137 ======================================================== 00:23:43.137 Latency(us) 00:23:43.137 Device Information : IOPS MiB/s Average min max 00:23:43.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8400.16 32.81 7620.31 1165.35 10718.13 00:23:43.137 ======================================================== 00:23:43.137 Total : 8400.16 32.81 7620.31 1165.35 10718.13 00:23:43.137 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7H3g3cf1k9 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7H3g3cf1k9 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=936164 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 936164 /var/tmp/bdevperf.sock 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 936164 ']' 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.137 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:43.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.138 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.138 01:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.138 [2024-10-01 01:41:20.892188] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:43.138 [2024-10-01 01:41:20.892274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936164 ] 00:23:43.138 [2024-10-01 01:41:20.957472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.138 [2024-10-01 01:41:21.049196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.138 01:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.138 01:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:43.138 01:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7H3g3cf1k9 00:23:43.138 01:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.138 [2024-10-01 01:41:21.675689] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.138 TLSTESTn1 00:23:43.138 01:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:43.138 Running I/O for 10 seconds... 
00:23:52.455 3350.00 IOPS, 13.09 MiB/s 3490.50 IOPS, 13.63 MiB/s 3509.67 IOPS, 13.71 MiB/s 3521.25 IOPS, 13.75 MiB/s 3537.20 IOPS, 13.82 MiB/s 3547.17 IOPS, 13.86 MiB/s 3545.57 IOPS, 13.85 MiB/s 3550.00 IOPS, 13.87 MiB/s 3548.56 IOPS, 13.86 MiB/s 3542.30 IOPS, 13.84 MiB/s 00:23:52.455 Latency(us) 00:23:52.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.455 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:52.455 Verification LBA range: start 0x0 length 0x2000 00:23:52.456 TLSTESTn1 : 10.04 3541.96 13.84 0.00 0.00 36055.14 10048.85 51263.72 00:23:52.456 =================================================================================================================== 00:23:52.456 Total : 3541.96 13.84 0.00 0.00 36055.14 10048.85 51263.72 00:23:52.456 { 00:23:52.456 "results": [ 00:23:52.456 { 00:23:52.456 "job": "TLSTESTn1", 00:23:52.456 "core_mask": "0x4", 00:23:52.456 "workload": "verify", 00:23:52.456 "status": "finished", 00:23:52.456 "verify_range": { 00:23:52.456 "start": 0, 00:23:52.456 "length": 8192 00:23:52.456 }, 00:23:52.456 "queue_depth": 128, 00:23:52.456 "io_size": 4096, 00:23:52.456 "runtime": 10.036538, 00:23:52.456 "iops": 3541.958392425755, 00:23:52.456 "mibps": 13.835774970413105, 00:23:52.456 "io_failed": 0, 00:23:52.456 "io_timeout": 0, 00:23:52.456 "avg_latency_us": 36055.13816209864, 00:23:52.456 "min_latency_us": 10048.853333333333, 00:23:52.456 "max_latency_us": 51263.71555555556 00:23:52.456 } 00:23:52.456 ], 00:23:52.456 "core_count": 1 00:23:52.456 } 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 936164 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 936164 ']' 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 936164 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936164 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936164' 00:23:52.456 killing process with pid 936164 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 936164 00:23:52.456 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.456 00:23:52.456 Latency(us) 00:23:52.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.456 =================================================================================================================== 00:23:52.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.456 01:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 936164 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.HLfiPsVE2r 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HLfiPsVE2r 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HLfiPsVE2r 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HLfiPsVE2r 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937475 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937475 /var/tmp/bdevperf.sock 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 937475 ']' 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.456 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.456 [2024-10-01 01:41:32.274684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:23:52.456 [2024-10-01 01:41:32.274774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937475 ] 00:23:52.714 [2024-10-01 01:41:32.340409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.714 [2024-10-01 01:41:32.424495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.714 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.714 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.714 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HLfiPsVE2r 00:23:52.972 01:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.231 [2024-10-01 01:41:33.039776] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.231 [2024-10-01 01:41:33.045862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:53.231 [2024-10-01 01:41:33.046845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908e60 (107): Transport endpoint is not connected 00:23:53.231 [2024-10-01 01:41:33.047835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908e60 (9): Bad file descriptor 00:23:53.231 [2024-10-01 01:41:33.048834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.231 [2024-10-01 01:41:33.048854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:53.231 [2024-10-01 01:41:33.048876] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:53.231 [2024-10-01 01:41:33.048900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:53.231 request: 00:23:53.231 { 00:23:53.231 "name": "TLSTEST", 00:23:53.231 "trtype": "tcp", 00:23:53.231 "traddr": "10.0.0.2", 00:23:53.231 "adrfam": "ipv4", 00:23:53.231 "trsvcid": "4420", 00:23:53.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.231 "prchk_reftag": false, 00:23:53.231 "prchk_guard": false, 00:23:53.231 "hdgst": false, 00:23:53.231 "ddgst": false, 00:23:53.231 "psk": "key0", 00:23:53.231 "allow_unrecognized_csi": false, 00:23:53.231 "method": "bdev_nvme_attach_controller", 00:23:53.231 "req_id": 1 00:23:53.231 } 00:23:53.231 Got JSON-RPC error response 00:23:53.231 response: 00:23:53.231 { 00:23:53.231 "code": -5, 00:23:53.231 "message": "Input/output error" 00:23:53.231 } 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 937475 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 937475 ']' 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 937475 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.231 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937475 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937475' 00:23:53.490 killing process with pid 937475 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 937475 00:23:53.490 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.490 00:23:53.490 Latency(us) 00:23:53.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.490 =================================================================================================================== 00:23:53.490 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 937475 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7H3g3cf1k9 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7H3g3cf1k9 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7H3g3cf1k9 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7H3g3cf1k9 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937606 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937606 /var/tmp/bdevperf.sock 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 937606 ']' 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.490 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.490 [2024-10-01 01:41:33.342206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:23:53.490 [2024-10-01 01:41:33.342284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937606 ] 00:23:53.748 [2024-10-01 01:41:33.406273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.748 [2024-10-01 01:41:33.494237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.749 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.749 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.749 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7H3g3cf1k9 00:23:54.007 01:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:54.267 [2024-10-01 01:41:34.109780] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.267 [2024-10-01 01:41:34.118596] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:54.267 [2024-10-01 01:41:34.118626] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:54.267 [2024-10-01 01:41:34.118661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:54.267 [2024-10-01 01:41:34.119017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f6e60 (107): Transport endpoint is not connected 00:23:54.267 [2024-10-01 01:41:34.120010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f6e60 (9): Bad file descriptor 00:23:54.267 [2024-10-01 01:41:34.121006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:54.267 [2024-10-01 01:41:34.121050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:54.267 [2024-10-01 01:41:34.121065] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:54.267 [2024-10-01 01:41:34.121084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:54.527 request: 00:23:54.527 { 00:23:54.527 "name": "TLSTEST", 00:23:54.527 "trtype": "tcp", 00:23:54.527 "traddr": "10.0.0.2", 00:23:54.527 "adrfam": "ipv4", 00:23:54.527 "trsvcid": "4420", 00:23:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.527 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:54.527 "prchk_reftag": false, 00:23:54.527 "prchk_guard": false, 00:23:54.527 "hdgst": false, 00:23:54.527 "ddgst": false, 00:23:54.527 "psk": "key0", 00:23:54.527 "allow_unrecognized_csi": false, 00:23:54.527 "method": "bdev_nvme_attach_controller", 00:23:54.527 "req_id": 1 00:23:54.527 } 00:23:54.527 Got JSON-RPC error response 00:23:54.527 response: 00:23:54.527 { 00:23:54.527 "code": -5, 00:23:54.527 "message": "Input/output error" 00:23:54.527 } 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 937606 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 937606 ']' 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 937606 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937606 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937606' 00:23:54.527 killing process with pid 937606 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 937606 00:23:54.527 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.527 00:23:54.527 Latency(us) 00:23:54.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.527 =================================================================================================================== 00:23:54.527 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:54.527 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 937606 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7H3g3cf1k9 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7H3g3cf1k9 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7H3g3cf1k9 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7H3g3cf1k9 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937752 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937752 /var/tmp/bdevperf.sock 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 937752 ']' 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.785 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.785 [2024-10-01 01:41:34.455960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:23:54.785 [2024-10-01 01:41:34.456084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937752 ] 00:23:54.785 [2024-10-01 01:41:34.518066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.785 [2024-10-01 01:41:34.603060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.042 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.042 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.042 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7H3g3cf1k9 00:23:55.299 01:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.557 [2024-10-01 01:41:35.234399] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.557 [2024-10-01 01:41:35.241433] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.557 [2024-10-01 01:41:35.241461] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.557 [2024-10-01 01:41:35.241506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.557 [2024-10-01 01:41:35.241588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ee60 (107): Transport endpoint is not connected 00:23:55.557 [2024-10-01 01:41:35.242510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3ee60 (9): Bad file descriptor 00:23:55.557 [2024-10-01 01:41:35.243508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:55.557 [2024-10-01 01:41:35.243527] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:55.557 [2024-10-01 01:41:35.243551] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:55.557 [2024-10-01 01:41:35.243569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
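The same check is then repeated with cnode2 and host1 and fails the same way; the JSON-RPC dump below reports the attach as code -5 (Input/output error), which is exactly what the surrounding NOT wrapper expects. Stripped of the xtrace noise, the client-side step that is expected to fail looks roughly like this (socket path, NQNs and key file copied from the trace; rpc.py stands for the scripts/rpc.py invocation used throughout the log):

    # The key exists locally, but the target has no PSK registered for this
    # host/subsystem pair, so the attach must not succeed.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7H3g3cf1k9
    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected: attach succeeded" >&2; exit 1
    fi
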
00:23:55.557 request: 00:23:55.557 { 00:23:55.557 "name": "TLSTEST", 00:23:55.557 "trtype": "tcp", 00:23:55.557 "traddr": "10.0.0.2", 00:23:55.557 "adrfam": "ipv4", 00:23:55.557 "trsvcid": "4420", 00:23:55.557 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.557 "prchk_reftag": false, 00:23:55.557 "prchk_guard": false, 00:23:55.557 "hdgst": false, 00:23:55.557 "ddgst": false, 00:23:55.557 "psk": "key0", 00:23:55.557 "allow_unrecognized_csi": false, 00:23:55.557 "method": "bdev_nvme_attach_controller", 00:23:55.557 "req_id": 1 00:23:55.557 } 00:23:55.557 Got JSON-RPC error response 00:23:55.557 response: 00:23:55.557 { 00:23:55.557 "code": -5, 00:23:55.557 "message": "Input/output error" 00:23:55.557 } 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 937752 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 937752 ']' 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 937752 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937752 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937752' 00:23:55.557 killing process with pid 937752 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 937752 00:23:55.557 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.557 00:23:55.557 Latency(us) 00:23:55.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.557 =================================================================================================================== 00:23:55.557 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.557 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 937752 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937893 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937893 /var/tmp/bdevperf.sock 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 937893 ']' 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.815 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.815 [2024-10-01 01:41:35.542924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:23:55.815 [2024-10-01 01:41:35.543040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937893 ] 00:23:55.815 [2024-10-01 01:41:35.602858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.072 [2024-10-01 01:41:35.688760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.072 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.072 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.072 01:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:56.330 [2024-10-01 01:41:36.031733] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:56.330 [2024-10-01 01:41:36.031794] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.330 request: 00:23:56.330 { 00:23:56.330 "name": "key0", 00:23:56.330 "path": "", 00:23:56.330 "method": "keyring_file_add_key", 00:23:56.330 "req_id": 1 00:23:56.330 } 00:23:56.330 Got JSON-RPC error response 00:23:56.330 response: 00:23:56.330 { 00:23:56.330 "code": -1, 00:23:56.330 "message": "Operation not permitted" 00:23:56.330 } 00:23:56.330 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.590 [2024-10-01 01:41:36.304582] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.590 [2024-10-01 01:41:36.304635] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:56.590 request: 00:23:56.590 { 00:23:56.590 "name": "TLSTEST", 00:23:56.590 "trtype": "tcp", 00:23:56.590 "traddr": "10.0.0.2", 00:23:56.590 "adrfam": "ipv4", 00:23:56.590 "trsvcid": "4420", 00:23:56.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.590 "prchk_reftag": false, 00:23:56.590 "prchk_guard": false, 00:23:56.590 "hdgst": false, 00:23:56.590 "ddgst": false, 00:23:56.590 "psk": "key0", 00:23:56.590 "allow_unrecognized_csi": false, 00:23:56.590 "method": "bdev_nvme_attach_controller", 00:23:56.590 "req_id": 1 00:23:56.590 } 00:23:56.590 Got JSON-RPC error response 00:23:56.590 response: 00:23:56.590 { 00:23:56.590 "code": -126, 00:23:56.590 "message": "Required key not available" 00:23:56.590 } 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 937893 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 937893 ']' 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 937893 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937893 
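The run above (pid 937893) passes an empty string as the key path: the file-based keyring only accepts absolute paths, so keyring_file_add_key fails with -1 (Operation not permitted), and the follow-up attach fails with -126 (Required key not available) because --psk key0 now names a key that was never registered. A minimal sketch of the two outcomes (the absolute path in the second line is illustrative, not taken from this run):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''            # rejected: non-absolute (empty) path
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.key  # accepted, assuming the file exists and is owner-only (0600)
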
00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937893' 00:23:56.590 killing process with pid 937893 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 937893 00:23:56.590 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.590 00:23:56.590 Latency(us) 00:23:56.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.590 =================================================================================================================== 00:23:56.590 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.590 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 937893 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 934254 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 934254 ']' 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 934254 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934254 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934254' 00:23:56.849 killing process with pid 934254 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 934254 00:23:56.849 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 934254 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 
00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.F4dj8o2U6Z 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.F4dj8o2U6Z 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=938161 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 938161 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 938161 ']' 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.107 01:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.367 [2024-10-01 01:41:36.980901] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:23:57.367 [2024-10-01 01:41:36.980995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.367 [2024-10-01 01:41:37.048747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.367 [2024-10-01 01:41:37.143194] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.367 [2024-10-01 01:41:37.143258] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
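target/tls.sh@160 above derives the long-format key: format_interchange_psk takes the raw hex string and a digest selector of 2, delegates to a format_key helper that runs an inline `python -` heredoc (whose body xtrace does not echo), and the resulting NVMeTLSkey-1:02:...: value is written to the mktemp file /tmp/tmp.F4dj8o2U6Z and locked down to 0600. The sketch below is a reconstruction of what that hidden snippet plausibly computes, inferred from the printed key_long (the hex string is used verbatim as bytes, a CRC32 is appended, and the whole thing is base64-encoded behind the prefix); treat the function body as an assumption, not the script's actual source:

    format_interchange_psk() {
        # $1 = key material (used verbatim as bytes), $2 = hash selector for the "NVMeTLSkey-1:<xx>:" prefix
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$1" "$2"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
    # should reproduce the NVMeTLSkey-1:02:...: value assigned to key_long above, if the reconstruction is faithful
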
00:23:57.367 [2024-10-01 01:41:37.143287] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.367 [2024-10-01 01:41:37.143301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.367 [2024-10-01 01:41:37.143313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.367 [2024-10-01 01:41:37.143345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F4dj8o2U6Z 00:23:57.625 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.884 [2024-10-01 01:41:37.539727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.884 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.144 01:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.403 [2024-10-01 01:41:38.089198] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.403 [2024-10-01 01:41:38.089482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.403 01:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.661 malloc0 00:23:58.661 01:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.919 01:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:23:59.177 01:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F4dj8o2U6Z 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F4dj8o2U6Z 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=938445 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 938445 /var/tmp/bdevperf.sock 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 938445 ']' 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.435 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.435 [2024-10-01 01:41:39.243399] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
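With the 0600 key file in place, target/tls.sh@166 provisions the target end to end: a TCP transport, a subsystem with a malloc namespace, a TLS-enabled listener (the -k flag), the key registered in the file keyring, and the host entry bound to that key with --psk. The bdevperf instance starting above then attaches successfully and drives the verify workload through bdevperf.py perform_tests in the run that follows. Condensed from the trace (the log issues these inside the target's network namespace against the default RPC socket; rpc.py is abbreviated here):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
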
00:23:59.435 [2024-10-01 01:41:39.243473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938445 ] 00:23:59.692 [2024-10-01 01:41:39.302298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.692 [2024-10-01 01:41:39.385398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.692 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.692 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:59.692 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:23:59.950 01:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.209 [2024-10-01 01:41:40.012208] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.469 TLSTESTn1 00:24:00.469 01:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.469 Running I/O for 10 seconds... 00:24:10.663 3339.00 IOPS, 13.04 MiB/s 3446.00 IOPS, 13.46 MiB/s 3426.00 IOPS, 13.38 MiB/s 3424.50 IOPS, 13.38 MiB/s 3445.00 IOPS, 13.46 MiB/s 3452.67 IOPS, 13.49 MiB/s 3457.14 IOPS, 13.50 MiB/s 3459.75 IOPS, 13.51 MiB/s 3466.00 IOPS, 13.54 MiB/s 3467.70 IOPS, 13.55 MiB/s 00:24:10.663 Latency(us) 00:24:10.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.663 Verification LBA range: start 0x0 length 0x2000 00:24:10.663 TLSTESTn1 : 10.02 3472.33 13.56 0.00 0.00 36796.66 10437.21 40195.41 00:24:10.663 =================================================================================================================== 00:24:10.663 Total : 3472.33 13.56 0.00 0.00 36796.66 10437.21 40195.41 00:24:10.663 { 00:24:10.663 "results": [ 00:24:10.663 { 00:24:10.663 "job": "TLSTESTn1", 00:24:10.663 "core_mask": "0x4", 00:24:10.663 "workload": "verify", 00:24:10.663 "status": "finished", 00:24:10.663 "verify_range": { 00:24:10.663 "start": 0, 00:24:10.663 "length": 8192 00:24:10.663 }, 00:24:10.663 "queue_depth": 128, 00:24:10.663 "io_size": 4096, 00:24:10.663 "runtime": 10.023238, 00:24:10.663 "iops": 3472.33099722864, 00:24:10.663 "mibps": 13.563792957924376, 00:24:10.663 "io_failed": 0, 00:24:10.663 "io_timeout": 0, 00:24:10.663 "avg_latency_us": 36796.6645475403, 00:24:10.663 "min_latency_us": 10437.214814814815, 00:24:10.663 "max_latency_us": 40195.41333333333 00:24:10.663 } 00:24:10.663 ], 00:24:10.663 "core_count": 1 00:24:10.663 } 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 938445 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 938445 ']' 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 938445 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 938445 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 938445' 00:24:10.663 killing process with pid 938445 00:24:10.663 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 938445 00:24:10.664 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.664 00:24:10.664 Latency(us) 00:24:10.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.664 =================================================================================================================== 00:24:10.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.664 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 938445 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.F4dj8o2U6Z 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F4dj8o2U6Z 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F4dj8o2U6Z 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F4dj8o2U6Z 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F4dj8o2U6Z 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=939735 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 939735 /var/tmp/bdevperf.sock 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 939735 ']' 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.925 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.925 [2024-10-01 01:41:50.610349] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:10.925 [2024-10-01 01:41:50.610446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939735 ] 00:24:10.925 [2024-10-01 01:41:50.680797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.925 [2024-10-01 01:41:50.771962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.184 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.184 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:11.184 01:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:11.442 [2024-10-01 01:41:51.197856] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.F4dj8o2U6Z': 0100666 00:24:11.442 [2024-10-01 01:41:51.197898] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:11.442 request: 00:24:11.442 { 00:24:11.442 "name": "key0", 00:24:11.442 "path": "/tmp/tmp.F4dj8o2U6Z", 00:24:11.442 "method": "keyring_file_add_key", 00:24:11.442 "req_id": 1 00:24:11.442 } 00:24:11.442 Got JSON-RPC error response 00:24:11.442 response: 00:24:11.442 { 00:24:11.442 "code": -1, 00:24:11.442 "message": "Operation not permitted" 00:24:11.442 } 00:24:11.442 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.700 [2024-10-01 01:41:51.462670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.700 [2024-10-01 01:41:51.462722] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: 
key0 00:24:11.700 request: 00:24:11.700 { 00:24:11.700 "name": "TLSTEST", 00:24:11.700 "trtype": "tcp", 00:24:11.700 "traddr": "10.0.0.2", 00:24:11.700 "adrfam": "ipv4", 00:24:11.700 "trsvcid": "4420", 00:24:11.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.701 "prchk_reftag": false, 00:24:11.701 "prchk_guard": false, 00:24:11.701 "hdgst": false, 00:24:11.701 "ddgst": false, 00:24:11.701 "psk": "key0", 00:24:11.701 "allow_unrecognized_csi": false, 00:24:11.701 "method": "bdev_nvme_attach_controller", 00:24:11.701 "req_id": 1 00:24:11.701 } 00:24:11.701 Got JSON-RPC error response 00:24:11.701 response: 00:24:11.701 { 00:24:11.701 "code": -126, 00:24:11.701 "message": "Required key not available" 00:24:11.701 } 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 939735 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 939735 ']' 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 939735 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 939735 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 939735' 00:24:11.701 killing process with pid 939735 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 939735 00:24:11.701 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.701 00:24:11.701 Latency(us) 00:24:11.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.701 =================================================================================================================== 00:24:11.701 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.701 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 939735 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 938161 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 938161 ']' 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 938161 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 938161 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 938161' 00:24:11.959 killing process with pid 938161 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 938161 00:24:11.959 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 938161 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=939919 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.219 01:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 939919 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 939919 ']' 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.219 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.219 [2024-10-01 01:41:52.052070] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:12.219 [2024-10-01 01:41:52.052165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.477 [2024-10-01 01:41:52.123927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.478 [2024-10-01 01:41:52.210658] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.478 [2024-10-01 01:41:52.210728] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.478 [2024-10-01 01:41:52.210754] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.478 [2024-10-01 01:41:52.210776] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
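The bdevperf run that just failed (pid 939735) is the key-file permission check: target/tls.sh@171 loosened the mode of /tmp/tmp.F4dj8o2U6Z to 0666, and the file keyring refuses any key readable by group or other ("Invalid permissions for key file '/tmp/tmp.F4dj8o2U6Z': 0100666"), so keyring_file_add_key returns -1 and the attach again ends in -126. The remedy the script applies further down (target/tls.sh@182) is simply to keep PSK files owner-only:

    # PSK files must be private to their owner before they can be registered;
    # 0600 is what the passing runs in this log use, 0666 is rejected.
    chmod 0600 /tmp/tmp.F4dj8o2U6Z
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z
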
00:24:12.478 [2024-10-01 01:41:52.210789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.478 [2024-10-01 01:41:52.210826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.478 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.478 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.478 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:12.478 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.478 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F4dj8o2U6Z 00:24:12.736 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:12.993 [2024-10-01 01:41:52.595333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.993 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:13.251 01:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:13.510 [2024-10-01 01:41:53.124752] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.510 [2024-10-01 01:41:53.125034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.510 01:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:13.768 malloc0 00:24:13.768 01:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.026 01:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:14.285 [2024-10-01 01:41:53.925707] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.F4dj8o2U6Z': 0100666 00:24:14.285 [2024-10-01 01:41:53.925757] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:14.285 request: 00:24:14.285 { 00:24:14.285 "name": "key0", 00:24:14.285 "path": "/tmp/tmp.F4dj8o2U6Z", 00:24:14.285 "method": "keyring_file_add_key", 00:24:14.285 "req_id": 1 00:24:14.285 } 00:24:14.285 Got JSON-RPC error response 00:24:14.285 response: 00:24:14.285 { 00:24:14.285 "code": -1, 00:24:14.285 "message": "Operation not permitted" 00:24:14.285 } 00:24:14.285 01:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:14.544 [2024-10-01 01:41:54.186431] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:14.544 [2024-10-01 01:41:54.186495] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:14.544 request: 00:24:14.544 { 00:24:14.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.544 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.544 "psk": "key0", 00:24:14.544 "method": "nvmf_subsystem_add_host", 00:24:14.544 "req_id": 1 00:24:14.544 } 00:24:14.544 Got JSON-RPC error response 00:24:14.544 response: 00:24:14.544 { 00:24:14.544 "code": -32603, 00:24:14.544 "message": "Internal error" 00:24:14.544 } 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 939919 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 939919 ']' 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 939919 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.544 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 939919 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 939919' 00:24:14.545 killing process with pid 939919 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 939919 00:24:14.545 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 939919 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.F4dj8o2U6Z 00:24:14.804 01:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=940214 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 940214 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 940214 ']' 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.804 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.804 [2024-10-01 01:41:54.565791] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:14.804 [2024-10-01 01:41:54.565885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.804 [2024-10-01 01:41:54.634922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.063 [2024-10-01 01:41:54.734806] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.063 [2024-10-01 01:41:54.734868] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.063 [2024-10-01 01:41:54.734882] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.063 [2024-10-01 01:41:54.734894] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.063 [2024-10-01 01:41:54.734903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
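The nvmf_tgt instance above (pid 939919) repeats the 0666 check on the target side, and it shows that nvmf_subsystem_add_host --psk does not read the key file itself: it only resolves a key name that must already exist in the keyring. Because keyring_file_add_key rejected the world-readable file, add_host fails with "Key 'key0' does not exist" and -32603 (Internal error). The ordering the successful setups in this log follow is therefore:

    rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z                                                # register the (owner-only) key first
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0      # then reference it by name
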
00:24:15.063 [2024-10-01 01:41:54.734931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F4dj8o2U6Z 00:24:15.063 01:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.629 [2024-10-01 01:41:55.179172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.629 01:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:15.888 01:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.146 [2024-10-01 01:41:55.792829] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.146 [2024-10-01 01:41:55.793125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.146 01:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.404 malloc0 00:24:16.404 01:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:16.663 01:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:16.920 01:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=940511 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 940511 /var/tmp/bdevperf.sock 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 940511 ']' 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.178 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.435 [2024-10-01 01:41:57.064896] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:17.435 [2024-10-01 01:41:57.064987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940511 ] 00:24:17.435 [2024-10-01 01:41:57.123877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.435 [2024-10-01 01:41:57.208594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.693 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.693 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:17.693 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:17.951 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.209 [2024-10-01 01:41:57.820232] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.209 TLSTESTn1 00:24:18.209 01:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:18.467 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:18.467 "subsystems": [ 00:24:18.467 { 00:24:18.467 "subsystem": "keyring", 00:24:18.467 "config": [ 00:24:18.467 { 00:24:18.467 "method": "keyring_file_add_key", 00:24:18.467 "params": { 00:24:18.467 "name": "key0", 00:24:18.467 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:18.467 } 00:24:18.467 } 00:24:18.467 ] 00:24:18.467 }, 00:24:18.467 { 00:24:18.467 "subsystem": "iobuf", 00:24:18.467 "config": [ 00:24:18.467 { 00:24:18.467 "method": "iobuf_set_options", 00:24:18.467 "params": { 00:24:18.467 "small_pool_count": 8192, 00:24:18.467 "large_pool_count": 1024, 00:24:18.467 "small_bufsize": 8192, 00:24:18.467 "large_bufsize": 135168 00:24:18.467 } 00:24:18.467 } 00:24:18.467 ] 00:24:18.467 }, 00:24:18.467 { 00:24:18.467 "subsystem": "sock", 00:24:18.467 "config": [ 00:24:18.467 { 00:24:18.467 "method": "sock_set_default_impl", 00:24:18.467 "params": { 00:24:18.467 "impl_name": "posix" 00:24:18.467 } 00:24:18.467 }, 
00:24:18.467 { 00:24:18.467 "method": "sock_impl_set_options", 00:24:18.467 "params": { 00:24:18.467 "impl_name": "ssl", 00:24:18.467 "recv_buf_size": 4096, 00:24:18.468 "send_buf_size": 4096, 00:24:18.468 "enable_recv_pipe": true, 00:24:18.468 "enable_quickack": false, 00:24:18.468 "enable_placement_id": 0, 00:24:18.468 "enable_zerocopy_send_server": true, 00:24:18.468 "enable_zerocopy_send_client": false, 00:24:18.468 "zerocopy_threshold": 0, 00:24:18.468 "tls_version": 0, 00:24:18.468 "enable_ktls": false 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "sock_impl_set_options", 00:24:18.468 "params": { 00:24:18.468 "impl_name": "posix", 00:24:18.468 "recv_buf_size": 2097152, 00:24:18.468 "send_buf_size": 2097152, 00:24:18.468 "enable_recv_pipe": true, 00:24:18.468 "enable_quickack": false, 00:24:18.468 "enable_placement_id": 0, 00:24:18.468 "enable_zerocopy_send_server": true, 00:24:18.468 "enable_zerocopy_send_client": false, 00:24:18.468 "zerocopy_threshold": 0, 00:24:18.468 "tls_version": 0, 00:24:18.468 "enable_ktls": false 00:24:18.468 } 00:24:18.468 } 00:24:18.468 ] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "vmd", 00:24:18.468 "config": [] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "accel", 00:24:18.468 "config": [ 00:24:18.468 { 00:24:18.468 "method": "accel_set_options", 00:24:18.468 "params": { 00:24:18.468 "small_cache_size": 128, 00:24:18.468 "large_cache_size": 16, 00:24:18.468 "task_count": 2048, 00:24:18.468 "sequence_count": 2048, 00:24:18.468 "buf_count": 2048 00:24:18.468 } 00:24:18.468 } 00:24:18.468 ] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "bdev", 00:24:18.468 "config": [ 00:24:18.468 { 00:24:18.468 "method": "bdev_set_options", 00:24:18.468 "params": { 00:24:18.468 "bdev_io_pool_size": 65535, 00:24:18.468 "bdev_io_cache_size": 256, 00:24:18.468 "bdev_auto_examine": true, 00:24:18.468 "iobuf_small_cache_size": 128, 00:24:18.468 "iobuf_large_cache_size": 16 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_raid_set_options", 00:24:18.468 "params": { 00:24:18.468 "process_window_size_kb": 1024, 00:24:18.468 "process_max_bandwidth_mb_sec": 0 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_iscsi_set_options", 00:24:18.468 "params": { 00:24:18.468 "timeout_sec": 30 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_nvme_set_options", 00:24:18.468 "params": { 00:24:18.468 "action_on_timeout": "none", 00:24:18.468 "timeout_us": 0, 00:24:18.468 "timeout_admin_us": 0, 00:24:18.468 "keep_alive_timeout_ms": 10000, 00:24:18.468 "arbitration_burst": 0, 00:24:18.468 "low_priority_weight": 0, 00:24:18.468 "medium_priority_weight": 0, 00:24:18.468 "high_priority_weight": 0, 00:24:18.468 "nvme_adminq_poll_period_us": 10000, 00:24:18.468 "nvme_ioq_poll_period_us": 0, 00:24:18.468 "io_queue_requests": 0, 00:24:18.468 "delay_cmd_submit": true, 00:24:18.468 "transport_retry_count": 4, 00:24:18.468 "bdev_retry_count": 3, 00:24:18.468 "transport_ack_timeout": 0, 00:24:18.468 "ctrlr_loss_timeout_sec": 0, 00:24:18.468 "reconnect_delay_sec": 0, 00:24:18.468 "fast_io_fail_timeout_sec": 0, 00:24:18.468 "disable_auto_failback": false, 00:24:18.468 "generate_uuids": false, 00:24:18.468 "transport_tos": 0, 00:24:18.468 "nvme_error_stat": false, 00:24:18.468 "rdma_srq_size": 0, 00:24:18.468 "io_path_stat": false, 00:24:18.468 "allow_accel_sequence": false, 00:24:18.468 "rdma_max_cq_size": 0, 00:24:18.468 "rdma_cm_event_timeout_ms": 0, 00:24:18.468 
"dhchap_digests": [ 00:24:18.468 "sha256", 00:24:18.468 "sha384", 00:24:18.468 "sha512" 00:24:18.468 ], 00:24:18.468 "dhchap_dhgroups": [ 00:24:18.468 "null", 00:24:18.468 "ffdhe2048", 00:24:18.468 "ffdhe3072", 00:24:18.468 "ffdhe4096", 00:24:18.468 "ffdhe6144", 00:24:18.468 "ffdhe8192" 00:24:18.468 ] 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_nvme_set_hotplug", 00:24:18.468 "params": { 00:24:18.468 "period_us": 100000, 00:24:18.468 "enable": false 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_malloc_create", 00:24:18.468 "params": { 00:24:18.468 "name": "malloc0", 00:24:18.468 "num_blocks": 8192, 00:24:18.468 "block_size": 4096, 00:24:18.468 "physical_block_size": 4096, 00:24:18.468 "uuid": "2ef68aa8-29ce-45e7-bdcf-8a738eac612c", 00:24:18.468 "optimal_io_boundary": 0, 00:24:18.468 "md_size": 0, 00:24:18.468 "dif_type": 0, 00:24:18.468 "dif_is_head_of_md": false, 00:24:18.468 "dif_pi_format": 0 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "bdev_wait_for_examine" 00:24:18.468 } 00:24:18.468 ] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "nbd", 00:24:18.468 "config": [] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "scheduler", 00:24:18.468 "config": [ 00:24:18.468 { 00:24:18.468 "method": "framework_set_scheduler", 00:24:18.468 "params": { 00:24:18.468 "name": "static" 00:24:18.468 } 00:24:18.468 } 00:24:18.468 ] 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "subsystem": "nvmf", 00:24:18.468 "config": [ 00:24:18.468 { 00:24:18.468 "method": "nvmf_set_config", 00:24:18.468 "params": { 00:24:18.468 "discovery_filter": "match_any", 00:24:18.468 "admin_cmd_passthru": { 00:24:18.468 "identify_ctrlr": false 00:24:18.468 }, 00:24:18.468 "dhchap_digests": [ 00:24:18.468 "sha256", 00:24:18.468 "sha384", 00:24:18.468 "sha512" 00:24:18.468 ], 00:24:18.468 "dhchap_dhgroups": [ 00:24:18.468 "null", 00:24:18.468 "ffdhe2048", 00:24:18.468 "ffdhe3072", 00:24:18.468 "ffdhe4096", 00:24:18.468 "ffdhe6144", 00:24:18.468 "ffdhe8192" 00:24:18.468 ] 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_set_max_subsystems", 00:24:18.468 "params": { 00:24:18.468 "max_subsystems": 1024 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_set_crdt", 00:24:18.468 "params": { 00:24:18.468 "crdt1": 0, 00:24:18.468 "crdt2": 0, 00:24:18.468 "crdt3": 0 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_create_transport", 00:24:18.468 "params": { 00:24:18.468 "trtype": "TCP", 00:24:18.468 "max_queue_depth": 128, 00:24:18.468 "max_io_qpairs_per_ctrlr": 127, 00:24:18.468 "in_capsule_data_size": 4096, 00:24:18.468 "max_io_size": 131072, 00:24:18.468 "io_unit_size": 131072, 00:24:18.468 "max_aq_depth": 128, 00:24:18.468 "num_shared_buffers": 511, 00:24:18.468 "buf_cache_size": 4294967295, 00:24:18.468 "dif_insert_or_strip": false, 00:24:18.468 "zcopy": false, 00:24:18.468 "c2h_success": false, 00:24:18.468 "sock_priority": 0, 00:24:18.468 "abort_timeout_sec": 1, 00:24:18.468 "ack_timeout": 0, 00:24:18.468 "data_wr_pool_size": 0 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_create_subsystem", 00:24:18.468 "params": { 00:24:18.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.468 "allow_any_host": false, 00:24:18.468 "serial_number": "SPDK00000000000001", 00:24:18.468 "model_number": "SPDK bdev Controller", 00:24:18.468 "max_namespaces": 10, 00:24:18.468 "min_cntlid": 1, 00:24:18.468 "max_cntlid": 65519, 00:24:18.468 
"ana_reporting": false 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_subsystem_add_host", 00:24:18.468 "params": { 00:24:18.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.468 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.468 "psk": "key0" 00:24:18.468 } 00:24:18.468 }, 00:24:18.468 { 00:24:18.468 "method": "nvmf_subsystem_add_ns", 00:24:18.468 "params": { 00:24:18.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.469 "namespace": { 00:24:18.469 "nsid": 1, 00:24:18.469 "bdev_name": "malloc0", 00:24:18.469 "nguid": "2EF68AA829CE45E7BDCF8A738EAC612C", 00:24:18.469 "uuid": "2ef68aa8-29ce-45e7-bdcf-8a738eac612c", 00:24:18.469 "no_auto_visible": false 00:24:18.469 } 00:24:18.469 } 00:24:18.469 }, 00:24:18.469 { 00:24:18.469 "method": "nvmf_subsystem_add_listener", 00:24:18.469 "params": { 00:24:18.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.469 "listen_address": { 00:24:18.469 "trtype": "TCP", 00:24:18.469 "adrfam": "IPv4", 00:24:18.469 "traddr": "10.0.0.2", 00:24:18.469 "trsvcid": "4420" 00:24:18.469 }, 00:24:18.469 "secure_channel": true 00:24:18.469 } 00:24:18.469 } 00:24:18.469 ] 00:24:18.469 } 00:24:18.469 ] 00:24:18.469 }' 00:24:18.469 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:19.034 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:19.034 "subsystems": [ 00:24:19.034 { 00:24:19.034 "subsystem": "keyring", 00:24:19.034 "config": [ 00:24:19.034 { 00:24:19.034 "method": "keyring_file_add_key", 00:24:19.034 "params": { 00:24:19.034 "name": "key0", 00:24:19.034 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:19.034 } 00:24:19.034 } 00:24:19.034 ] 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "subsystem": "iobuf", 00:24:19.034 "config": [ 00:24:19.034 { 00:24:19.034 "method": "iobuf_set_options", 00:24:19.034 "params": { 00:24:19.034 "small_pool_count": 8192, 00:24:19.034 "large_pool_count": 1024, 00:24:19.034 "small_bufsize": 8192, 00:24:19.034 "large_bufsize": 135168 00:24:19.034 } 00:24:19.034 } 00:24:19.034 ] 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "subsystem": "sock", 00:24:19.034 "config": [ 00:24:19.034 { 00:24:19.034 "method": "sock_set_default_impl", 00:24:19.034 "params": { 00:24:19.034 "impl_name": "posix" 00:24:19.034 } 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "method": "sock_impl_set_options", 00:24:19.034 "params": { 00:24:19.034 "impl_name": "ssl", 00:24:19.034 "recv_buf_size": 4096, 00:24:19.034 "send_buf_size": 4096, 00:24:19.034 "enable_recv_pipe": true, 00:24:19.034 "enable_quickack": false, 00:24:19.034 "enable_placement_id": 0, 00:24:19.034 "enable_zerocopy_send_server": true, 00:24:19.034 "enable_zerocopy_send_client": false, 00:24:19.034 "zerocopy_threshold": 0, 00:24:19.034 "tls_version": 0, 00:24:19.034 "enable_ktls": false 00:24:19.034 } 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "method": "sock_impl_set_options", 00:24:19.034 "params": { 00:24:19.034 "impl_name": "posix", 00:24:19.034 "recv_buf_size": 2097152, 00:24:19.034 "send_buf_size": 2097152, 00:24:19.034 "enable_recv_pipe": true, 00:24:19.034 "enable_quickack": false, 00:24:19.034 "enable_placement_id": 0, 00:24:19.034 "enable_zerocopy_send_server": true, 00:24:19.034 "enable_zerocopy_send_client": false, 00:24:19.034 "zerocopy_threshold": 0, 00:24:19.034 "tls_version": 0, 00:24:19.034 "enable_ktls": false 00:24:19.034 } 00:24:19.034 } 00:24:19.034 ] 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 
"subsystem": "vmd", 00:24:19.034 "config": [] 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "subsystem": "accel", 00:24:19.034 "config": [ 00:24:19.034 { 00:24:19.034 "method": "accel_set_options", 00:24:19.034 "params": { 00:24:19.034 "small_cache_size": 128, 00:24:19.034 "large_cache_size": 16, 00:24:19.034 "task_count": 2048, 00:24:19.034 "sequence_count": 2048, 00:24:19.034 "buf_count": 2048 00:24:19.034 } 00:24:19.034 } 00:24:19.034 ] 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "subsystem": "bdev", 00:24:19.034 "config": [ 00:24:19.034 { 00:24:19.034 "method": "bdev_set_options", 00:24:19.034 "params": { 00:24:19.034 "bdev_io_pool_size": 65535, 00:24:19.034 "bdev_io_cache_size": 256, 00:24:19.034 "bdev_auto_examine": true, 00:24:19.034 "iobuf_small_cache_size": 128, 00:24:19.034 "iobuf_large_cache_size": 16 00:24:19.034 } 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "method": "bdev_raid_set_options", 00:24:19.034 "params": { 00:24:19.034 "process_window_size_kb": 1024, 00:24:19.034 "process_max_bandwidth_mb_sec": 0 00:24:19.034 } 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "method": "bdev_iscsi_set_options", 00:24:19.034 "params": { 00:24:19.034 "timeout_sec": 30 00:24:19.034 } 00:24:19.034 }, 00:24:19.034 { 00:24:19.034 "method": "bdev_nvme_set_options", 00:24:19.034 "params": { 00:24:19.034 "action_on_timeout": "none", 00:24:19.034 "timeout_us": 0, 00:24:19.034 "timeout_admin_us": 0, 00:24:19.034 "keep_alive_timeout_ms": 10000, 00:24:19.035 "arbitration_burst": 0, 00:24:19.035 "low_priority_weight": 0, 00:24:19.035 "medium_priority_weight": 0, 00:24:19.035 "high_priority_weight": 0, 00:24:19.035 "nvme_adminq_poll_period_us": 10000, 00:24:19.035 "nvme_ioq_poll_period_us": 0, 00:24:19.035 "io_queue_requests": 512, 00:24:19.035 "delay_cmd_submit": true, 00:24:19.035 "transport_retry_count": 4, 00:24:19.035 "bdev_retry_count": 3, 00:24:19.035 "transport_ack_timeout": 0, 00:24:19.035 "ctrlr_loss_timeout_sec": 0, 00:24:19.035 "reconnect_delay_sec": 0, 00:24:19.035 "fast_io_fail_timeout_sec": 0, 00:24:19.035 "disable_auto_failback": false, 00:24:19.035 "generate_uuids": false, 00:24:19.035 "transport_tos": 0, 00:24:19.035 "nvme_error_stat": false, 00:24:19.035 "rdma_srq_size": 0, 00:24:19.035 "io_path_stat": false, 00:24:19.035 "allow_accel_sequence": false, 00:24:19.035 "rdma_max_cq_size": 0, 00:24:19.035 "rdma_cm_event_timeout_ms": 0, 00:24:19.035 "dhchap_digests": [ 00:24:19.035 "sha256", 00:24:19.035 "sha384", 00:24:19.035 "sha512" 00:24:19.035 ], 00:24:19.035 "dhchap_dhgroups": [ 00:24:19.035 "null", 00:24:19.035 "ffdhe2048", 00:24:19.035 "ffdhe3072", 00:24:19.035 "ffdhe4096", 00:24:19.035 "ffdhe6144", 00:24:19.035 "ffdhe8192" 00:24:19.035 ] 00:24:19.035 } 00:24:19.035 }, 00:24:19.035 { 00:24:19.035 "method": "bdev_nvme_attach_controller", 00:24:19.035 "params": { 00:24:19.035 "name": "TLSTEST", 00:24:19.035 "trtype": "TCP", 00:24:19.035 "adrfam": "IPv4", 00:24:19.035 "traddr": "10.0.0.2", 00:24:19.035 "trsvcid": "4420", 00:24:19.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.035 "prchk_reftag": false, 00:24:19.035 "prchk_guard": false, 00:24:19.035 "ctrlr_loss_timeout_sec": 0, 00:24:19.035 "reconnect_delay_sec": 0, 00:24:19.035 "fast_io_fail_timeout_sec": 0, 00:24:19.035 "psk": "key0", 00:24:19.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.035 "hdgst": false, 00:24:19.035 "ddgst": false 00:24:19.035 } 00:24:19.035 }, 00:24:19.035 { 00:24:19.035 "method": "bdev_nvme_set_hotplug", 00:24:19.035 "params": { 00:24:19.035 "period_us": 100000, 00:24:19.035 "enable": false 
00:24:19.035 } 00:24:19.035 }, 00:24:19.035 { 00:24:19.035 "method": "bdev_wait_for_examine" 00:24:19.035 } 00:24:19.035 ] 00:24:19.035 }, 00:24:19.035 { 00:24:19.035 "subsystem": "nbd", 00:24:19.035 "config": [] 00:24:19.035 } 00:24:19.035 ] 00:24:19.035 }' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 940511 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 940511 ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 940511 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 940511 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 940511' 00:24:19.035 killing process with pid 940511 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 940511 00:24:19.035 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.035 00:24:19.035 Latency(us) 00:24:19.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.035 =================================================================================================================== 00:24:19.035 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 940511 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 940214 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 940214 ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 940214 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 940214 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 940214' 00:24:19.035 killing process with pid 940214 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 940214 00:24:19.035 01:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 940214 00:24:19.298 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:19.298 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:19.298 01:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:19.298 "subsystems": [ 00:24:19.298 { 00:24:19.298 "subsystem": "keyring", 00:24:19.298 "config": [ 00:24:19.298 { 00:24:19.298 "method": "keyring_file_add_key", 00:24:19.298 "params": { 00:24:19.298 "name": "key0", 00:24:19.298 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:19.298 } 00:24:19.298 } 00:24:19.298 ] 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "subsystem": "iobuf", 00:24:19.298 "config": [ 00:24:19.298 { 00:24:19.298 "method": "iobuf_set_options", 00:24:19.298 "params": { 00:24:19.298 "small_pool_count": 8192, 00:24:19.298 "large_pool_count": 1024, 00:24:19.298 "small_bufsize": 8192, 00:24:19.298 "large_bufsize": 135168 00:24:19.298 } 00:24:19.298 } 00:24:19.298 ] 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "subsystem": "sock", 00:24:19.298 "config": [ 00:24:19.298 { 00:24:19.298 "method": "sock_set_default_impl", 00:24:19.298 "params": { 00:24:19.298 "impl_name": "posix" 00:24:19.298 } 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "method": "sock_impl_set_options", 00:24:19.298 "params": { 00:24:19.298 "impl_name": "ssl", 00:24:19.298 "recv_buf_size": 4096, 00:24:19.298 "send_buf_size": 4096, 00:24:19.298 "enable_recv_pipe": true, 00:24:19.298 "enable_quickack": false, 00:24:19.298 "enable_placement_id": 0, 00:24:19.298 "enable_zerocopy_send_server": true, 00:24:19.298 "enable_zerocopy_send_client": false, 00:24:19.298 "zerocopy_threshold": 0, 00:24:19.298 "tls_version": 0, 00:24:19.298 "enable_ktls": false 00:24:19.298 } 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "method": "sock_impl_set_options", 00:24:19.298 "params": { 00:24:19.298 "impl_name": "posix", 00:24:19.298 "recv_buf_size": 2097152, 00:24:19.298 "send_buf_size": 2097152, 00:24:19.298 "enable_recv_pipe": true, 00:24:19.298 "enable_quickack": false, 00:24:19.298 "enable_placement_id": 0, 00:24:19.298 "enable_zerocopy_send_server": true, 00:24:19.298 "enable_zerocopy_send_client": false, 00:24:19.298 "zerocopy_threshold": 0, 00:24:19.298 "tls_version": 0, 00:24:19.298 "enable_ktls": false 00:24:19.298 } 00:24:19.298 } 00:24:19.298 ] 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "subsystem": "vmd", 00:24:19.298 "config": [] 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "subsystem": "accel", 00:24:19.298 "config": [ 00:24:19.298 { 00:24:19.298 "method": "accel_set_options", 00:24:19.298 "params": { 00:24:19.298 "small_cache_size": 128, 00:24:19.298 "large_cache_size": 16, 00:24:19.298 "task_count": 2048, 00:24:19.298 "sequence_count": 2048, 00:24:19.298 "buf_count": 2048 00:24:19.298 } 00:24:19.298 } 00:24:19.298 ] 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "subsystem": "bdev", 00:24:19.298 "config": [ 00:24:19.298 { 00:24:19.298 "method": "bdev_set_options", 00:24:19.298 "params": { 00:24:19.298 "bdev_io_pool_size": 65535, 00:24:19.298 "bdev_io_cache_size": 256, 00:24:19.298 "bdev_auto_examine": true, 00:24:19.298 "iobuf_small_cache_size": 128, 00:24:19.298 "iobuf_large_cache_size": 16 00:24:19.298 } 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "method": "bdev_raid_set_options", 00:24:19.298 "params": { 00:24:19.298 "process_window_size_kb": 1024, 00:24:19.298 "process_max_bandwidth_mb_sec": 0 00:24:19.298 } 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "method": "bdev_iscsi_set_options", 00:24:19.298 "params": { 00:24:19.298 "timeout_sec": 30 00:24:19.298 } 00:24:19.298 }, 00:24:19.298 { 00:24:19.298 "method": "bdev_nvme_set_options", 00:24:19.298 "params": { 00:24:19.298 "action_on_timeout": "none", 00:24:19.298 "timeout_us": 0, 00:24:19.298 
"timeout_admin_us": 0, 00:24:19.298 "keep_alive_timeout_ms": 10000, 00:24:19.298 "arbitration_burst": 0, 00:24:19.298 "low_priority_weight": 0, 00:24:19.298 "medium_priority_weight": 0, 00:24:19.298 "high_priority_weight": 0, 00:24:19.298 "nvme_adminq_poll_period_us": 10000, 00:24:19.298 "nvme_ioq_poll_period_us": 0, 00:24:19.298 "io_queue_requests": 0, 00:24:19.298 "delay_cmd_submit": true, 00:24:19.298 "transport_retry_count": 4, 00:24:19.298 "bdev_retry_count": 3, 00:24:19.298 "transport_ack_timeout": 0, 00:24:19.298 "ctrlr_loss_timeout_sec": 0, 00:24:19.298 "reconnect_delay_sec": 0, 00:24:19.298 "fast_io_fail_timeout_sec": 0, 00:24:19.298 "disable_auto_failback": false, 00:24:19.298 "generate_uuids": false, 00:24:19.298 "transport_tos": 0, 00:24:19.299 "nvme_error_stat": false, 00:24:19.299 "rdma_srq_size": 0, 00:24:19.299 "io_path_stat": false, 00:24:19.299 "allow_accel_sequence": false, 00:24:19.299 "rdma_max_cq_size": 0, 00:24:19.299 "rdma_cm_event_timeout_ms": 0, 00:24:19.299 "dhchap_digests": [ 00:24:19.299 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.299 "sha256", 00:24:19.299 "sha384", 00:24:19.299 "sha512" 00:24:19.299 ], 00:24:19.299 "dhchap_dhgroups": [ 00:24:19.299 "null", 00:24:19.299 "ffdhe2048", 00:24:19.299 "ffdhe3072", 00:24:19.299 "ffdhe4096", 00:24:19.299 "ffdhe6144", 00:24:19.299 "ffdhe8192" 00:24:19.299 ] 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "bdev_nvme_set_hotplug", 00:24:19.299 "params": { 00:24:19.299 "period_us": 100000, 00:24:19.299 "enable": false 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "bdev_malloc_create", 00:24:19.299 "params": { 00:24:19.299 "name": "malloc0", 00:24:19.299 "num_blocks": 8192, 00:24:19.299 "block_size": 4096, 00:24:19.299 "physical_block_size": 4096, 00:24:19.299 "uuid": "2ef68aa8-29ce-45e7-bdcf-8a738eac612c", 00:24:19.299 "optimal_io_boundary": 0, 00:24:19.299 "md_size": 0, 00:24:19.299 "dif_type": 0, 00:24:19.299 "dif_is_head_of_md": false, 00:24:19.299 "dif_pi_format": 0 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "bdev_wait_for_examine" 00:24:19.299 } 00:24:19.299 ] 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "subsystem": "nbd", 00:24:19.299 "config": [] 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "subsystem": "scheduler", 00:24:19.299 "config": [ 00:24:19.299 { 00:24:19.299 "method": "framework_set_scheduler", 00:24:19.299 "params": { 00:24:19.299 "name": "static" 00:24:19.299 } 00:24:19.299 } 00:24:19.299 ] 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "subsystem": "nvmf", 00:24:19.299 "config": [ 00:24:19.299 { 00:24:19.299 "method": "nvmf_set_config", 00:24:19.299 "params": { 00:24:19.299 "discovery_filter": "match_any", 00:24:19.299 "admin_cmd_passthru": { 00:24:19.299 "identify_ctrlr": false 00:24:19.299 }, 00:24:19.299 "dhchap_digests": [ 00:24:19.299 "sha256", 00:24:19.299 "sha384", 00:24:19.299 "sha512" 00:24:19.299 ], 00:24:19.299 "dhchap_dhgroups": [ 00:24:19.299 "null", 00:24:19.299 "ffdhe2048", 00:24:19.299 "ffdhe3072", 00:24:19.299 "ffdhe4096", 00:24:19.299 "ffdhe6144", 00:24:19.299 "ffdhe8192" 00:24:19.299 ] 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_set_max_subsystems", 00:24:19.299 "params": { 00:24:19.299 "max_subsystems": 1024 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_set_crdt", 00:24:19.299 "params": { 00:24:19.299 "crdt1": 0, 00:24:19.299 "crdt2": 0, 00:24:19.299 "crdt3": 0 00:24:19.299 } 00:24:19.299 }, 
00:24:19.299 { 00:24:19.299 "method": "nvmf_create_transport", 00:24:19.299 "params": { 00:24:19.299 "trtype": "TCP", 00:24:19.299 "max_queue_depth": 128, 00:24:19.299 "max_io_qpairs_per_ctrlr": 127, 00:24:19.299 "in_capsule_data_size": 4096, 00:24:19.299 "max_io_size": 131072, 00:24:19.299 "io_unit_size": 131072, 00:24:19.299 "max_aq_depth": 128, 00:24:19.299 "num_shared_buffers": 511, 00:24:19.299 "buf_cache_size": 4294967295, 00:24:19.299 "dif_insert_or_strip": false, 00:24:19.299 "zcopy": false, 00:24:19.299 "c2h_success": false, 00:24:19.299 "sock_priority": 0, 00:24:19.299 "abort_timeout_sec": 1, 00:24:19.299 "ack_timeout": 0, 00:24:19.299 "data_wr_pool_size": 0 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_create_subsystem", 00:24:19.299 "params": { 00:24:19.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.299 "allow_any_host": false, 00:24:19.299 "serial_number": "SPDK00000000000001", 00:24:19.299 "model_number": "SPDK bdev Controller", 00:24:19.299 "max_namespaces": 10, 00:24:19.299 "min_cntlid": 1, 00:24:19.299 "max_cntlid": 65519, 00:24:19.299 "ana_reporting": false 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_subsystem_add_host", 00:24:19.299 "params": { 00:24:19.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.299 "host": "nqn.2016-06.io.spdk:host1", 00:24:19.299 "psk": "key0" 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_subsystem_add_ns", 00:24:19.299 "params": { 00:24:19.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.299 "namespace": { 00:24:19.299 "nsid": 1, 00:24:19.299 "bdev_name": "malloc0", 00:24:19.299 "nguid": "2EF68AA829CE45E7BDCF8A738EAC612C", 00:24:19.299 "uuid": "2ef68aa8-29ce-45e7-bdcf-8a738eac612c", 00:24:19.299 "no_auto_visible": false 00:24:19.299 } 00:24:19.299 } 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "method": "nvmf_subsystem_add_listener", 00:24:19.299 "params": { 00:24:19.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.299 "listen_address": { 00:24:19.299 "trtype": "TCP", 00:24:19.299 "adrfam": "IPv4", 00:24:19.299 "traddr": "10.0.0.2", 00:24:19.299 "trsvcid": "4420" 00:24:19.299 }, 00:24:19.299 "secure_channel": true 00:24:19.299 } 00:24:19.299 } 00:24:19.300 ] 00:24:19.300 } 00:24:19.300 ] 00:24:19.300 }' 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=940788 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 940788 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 940788 ']' 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
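The target instance above (pid 940788) is not reprovisioned call by call: tgtconf was captured earlier with save_config and is replayed via -c /dev/fd/62. A minimal sketch of that capture-and-replay pattern, assuming bash process substitution as the script uses (binary paths shortened):

  tgtconf=$(rpc.py save_config)            # dump the live target configuration as JSON
  nvmf_tgt -m 0x2 -c <(echo "$tgtconf")    # appears as -c /dev/fd/62 in the trace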
00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.300 01:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.563 [2024-10-01 01:41:59.193815] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:19.563 [2024-10-01 01:41:59.193913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.563 [2024-10-01 01:41:59.265116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.563 [2024-10-01 01:41:59.353741] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.563 [2024-10-01 01:41:59.353812] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.563 [2024-10-01 01:41:59.353839] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.563 [2024-10-01 01:41:59.353852] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.563 [2024-10-01 01:41:59.353864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.563 [2024-10-01 01:41:59.353956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.822 [2024-10-01 01:41:59.612572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.822 [2024-10-01 01:41:59.644579] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.822 [2024-10-01 01:41:59.644887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=940963 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 940963 /var/tmp/bdevperf.sock 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 940963 ']' 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:20.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.388 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:20.388 "subsystems": [ 00:24:20.388 { 00:24:20.388 "subsystem": "keyring", 00:24:20.388 "config": [ 00:24:20.388 { 00:24:20.388 "method": "keyring_file_add_key", 00:24:20.388 "params": { 00:24:20.388 "name": "key0", 00:24:20.388 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:20.388 } 00:24:20.388 } 00:24:20.388 ] 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "subsystem": "iobuf", 00:24:20.388 "config": [ 00:24:20.388 { 00:24:20.388 "method": "iobuf_set_options", 00:24:20.388 "params": { 00:24:20.388 "small_pool_count": 8192, 00:24:20.388 "large_pool_count": 1024, 00:24:20.388 "small_bufsize": 8192, 00:24:20.388 "large_bufsize": 135168 00:24:20.388 } 00:24:20.388 } 00:24:20.388 ] 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "subsystem": "sock", 00:24:20.388 "config": [ 00:24:20.388 { 00:24:20.388 "method": "sock_set_default_impl", 00:24:20.388 "params": { 00:24:20.388 "impl_name": "posix" 00:24:20.388 } 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "method": "sock_impl_set_options", 00:24:20.388 "params": { 00:24:20.388 "impl_name": "ssl", 00:24:20.388 "recv_buf_size": 4096, 00:24:20.388 "send_buf_size": 4096, 00:24:20.388 "enable_recv_pipe": true, 00:24:20.388 "enable_quickack": false, 00:24:20.388 "enable_placement_id": 0, 00:24:20.388 "enable_zerocopy_send_server": true, 00:24:20.388 "enable_zerocopy_send_client": false, 00:24:20.388 "zerocopy_threshold": 0, 00:24:20.388 "tls_version": 0, 00:24:20.388 "enable_ktls": false 00:24:20.388 } 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "method": "sock_impl_set_options", 00:24:20.388 "params": { 00:24:20.388 "impl_name": "posix", 00:24:20.388 "recv_buf_size": 2097152, 00:24:20.388 "send_buf_size": 2097152, 00:24:20.388 "enable_recv_pipe": true, 00:24:20.388 "enable_quickack": false, 00:24:20.388 "enable_placement_id": 0, 00:24:20.388 "enable_zerocopy_send_server": true, 00:24:20.388 "enable_zerocopy_send_client": false, 00:24:20.388 "zerocopy_threshold": 0, 00:24:20.388 "tls_version": 0, 00:24:20.388 "enable_ktls": false 00:24:20.388 } 00:24:20.388 } 00:24:20.388 ] 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "subsystem": "vmd", 00:24:20.388 "config": [] 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "subsystem": "accel", 00:24:20.388 "config": [ 00:24:20.388 { 00:24:20.388 "method": "accel_set_options", 00:24:20.388 "params": { 00:24:20.388 "small_cache_size": 128, 00:24:20.388 "large_cache_size": 16, 00:24:20.388 "task_count": 2048, 00:24:20.388 "sequence_count": 2048, 00:24:20.388 "buf_count": 2048 00:24:20.388 } 00:24:20.388 } 00:24:20.388 ] 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "subsystem": "bdev", 00:24:20.388 "config": [ 00:24:20.388 { 00:24:20.388 "method": "bdev_set_options", 00:24:20.388 "params": { 00:24:20.388 "bdev_io_pool_size": 65535, 00:24:20.388 "bdev_io_cache_size": 256, 00:24:20.388 "bdev_auto_examine": true, 00:24:20.388 "iobuf_small_cache_size": 128, 00:24:20.388 "iobuf_large_cache_size": 16 00:24:20.388 } 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "method": "bdev_raid_set_options", 00:24:20.388 "params": { 00:24:20.388 "process_window_size_kb": 1024, 00:24:20.388 "process_max_bandwidth_mb_sec": 0 00:24:20.388 } 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "method": 
"bdev_iscsi_set_options", 00:24:20.388 "params": { 00:24:20.388 "timeout_sec": 30 00:24:20.388 } 00:24:20.388 }, 00:24:20.388 { 00:24:20.388 "method": "bdev_nvme_set_options", 00:24:20.388 "params": { 00:24:20.388 "action_on_timeout": "none", 00:24:20.388 "timeout_us": 0, 00:24:20.388 "timeout_admin_us": 0, 00:24:20.388 "keep_alive_timeout_ms": 10000, 00:24:20.388 "arbitration_burst": 0, 00:24:20.388 "low_priority_weight": 0, 00:24:20.388 "medium_priority_weight": 0, 00:24:20.388 "high_priority_weight": 0, 00:24:20.388 "nvme_adminq_poll_period_us": 10000, 00:24:20.388 "nvme_ioq_poll_period_us": 0, 00:24:20.388 "io_queue_requests": 512, 00:24:20.388 "delay_cmd_submit": true, 00:24:20.388 "transport_retry_count": 4, 00:24:20.388 "bdev_retry_count": 3, 00:24:20.389 "transport_ack_timeout": 0, 00:24:20.389 "ctrlr_loss_timeout_sec": 0, 00:24:20.389 "reconnect_delay_sec": 0, 00:24:20.389 "fast_io_fail_timeout_sec": 0, 00:24:20.389 "disable_auto_failback": false, 00:24:20.389 "generate_uuids": false, 00:24:20.389 "transport_tos": 0, 00:24:20.389 "nvme_error_stat": false, 00:24:20.389 "rdma_srq_size": 0, 00:24:20.389 "io_path_stat": false, 00:24:20.389 "allow_accel_sequence": false, 00:24:20.389 "rdma_max_cq_size": 0, 00:24:20.389 "rdma_cm_event_timeout_ms": 0, 00:24:20.389 "dhchap_digests": [ 00:24:20.389 "sha256", 00:24:20.389 "sha384", 00:24:20.389 "sha512" 00:24:20.389 ], 00:24:20.389 "dhchap_dhgroups": [ 00:24:20.389 "null", 00:24:20.389 "ffdhe2048", 00:24:20.389 "ffdhe3072", 00:24:20.389 "ffdhe4096", 00:24:20.389 "ffdhe6144", 00:24:20.389 "ffdhe8192" 00:24:20.389 ] 00:24:20.389 } 00:24:20.389 }, 00:24:20.389 { 00:24:20.389 "method": "bdev_nvme_attach_controller", 00:24:20.389 "params": { 00:24:20.389 "name": "TLSTEST", 00:24:20.389 "trtype": "TCP", 00:24:20.389 "adrfam": "IPv4", 00:24:20.389 "traddr": "10.0.0.2", 00:24:20.389 "trsvcid": "4420", 00:24:20.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.389 "prchk_reftag": false, 00:24:20.389 "prchk_guard": false, 00:24:20.389 "ctrlr_loss_timeout_sec": 0, 00:24:20.389 "reconnect_delay_sec": 0, 00:24:20.389 "fast_io_fail_timeout_sec": 0, 00:24:20.389 "psk": "key0", 00:24:20.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.389 "hdgst": false, 00:24:20.389 "ddgst": false 00:24:20.389 } 00:24:20.389 }, 00:24:20.389 { 00:24:20.389 "method": "bdev_nvme_set_hotplug", 00:24:20.389 "params": { 00:24:20.389 "period_us": 100000, 00:24:20.389 "enable": false 00:24:20.389 } 00:24:20.389 }, 00:24:20.389 { 00:24:20.389 "method": "bdev_wait_for_examine" 00:24:20.389 } 00:24:20.389 ] 00:24:20.389 }, 00:24:20.389 { 00:24:20.389 "subsystem": "nbd", 00:24:20.389 "config": [] 00:24:20.389 } 00:24:20.389 ] 00:24:20.389 }' 00:24:20.389 01:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.646 [2024-10-01 01:42:00.251551] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:20.646 [2024-10-01 01:42:00.251649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940963 ] 00:24:20.646 [2024-10-01 01:42:00.316395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.646 [2024-10-01 01:42:00.412047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.904 [2024-10-01 01:42:00.591332] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.468 01:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.468 01:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:21.468 01:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:21.724 Running I/O for 10 seconds... 00:24:31.972 2934.00 IOPS, 11.46 MiB/s 2948.50 IOPS, 11.52 MiB/s 2917.33 IOPS, 11.40 MiB/s 2936.50 IOPS, 11.47 MiB/s 2944.00 IOPS, 11.50 MiB/s 2955.50 IOPS, 11.54 MiB/s 2953.43 IOPS, 11.54 MiB/s 2943.50 IOPS, 11.50 MiB/s 2945.56 IOPS, 11.51 MiB/s 2939.40 IOPS, 11.48 MiB/s 00:24:31.972 Latency(us) 00:24:31.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.972 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.972 Verification LBA range: start 0x0 length 0x2000 00:24:31.972 TLSTESTn1 : 10.03 2944.14 11.50 0.00 0.00 43402.62 6116.69 63691.28 00:24:31.972 =================================================================================================================== 00:24:31.972 Total : 2944.14 11.50 0.00 0.00 43402.62 6116.69 63691.28 00:24:31.972 { 00:24:31.972 "results": [ 00:24:31.972 { 00:24:31.972 "job": "TLSTESTn1", 00:24:31.972 "core_mask": "0x4", 00:24:31.972 "workload": "verify", 00:24:31.972 "status": "finished", 00:24:31.972 "verify_range": { 00:24:31.972 "start": 0, 00:24:31.972 "length": 8192 00:24:31.972 }, 00:24:31.972 "queue_depth": 128, 00:24:31.972 "io_size": 4096, 00:24:31.972 "runtime": 10.026701, 00:24:31.972 "iops": 2944.138854843682, 00:24:31.972 "mibps": 11.500542401733133, 00:24:31.972 "io_failed": 0, 00:24:31.972 "io_timeout": 0, 00:24:31.972 "avg_latency_us": 43402.61586750978, 00:24:31.972 "min_latency_us": 6116.693333333334, 00:24:31.972 "max_latency_us": 63691.28296296296 00:24:31.972 } 00:24:31.972 ], 00:24:31.972 "core_count": 1 00:24:31.972 } 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 940963 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 940963 ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 940963 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 940963 00:24:31.972 01:42:11 
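The summary line above is internally consistent: with a 4096-byte I/O size, 2944.14 IOPS works out to 2944.14 * 4096 / 1048576 ≈ 11.50 MiB/s, matching the MiB/s column, and the average latency of roughly 43,403 us together with those IOPS gives 2944 * 0.0434 s ≈ 128 outstanding I/Os, which is the configured queue depth (-q 128). The latency columns are reported in microseconds.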
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 940963' 00:24:31.972 killing process with pid 940963 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 940963 00:24:31.972 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.972 00:24:31.972 Latency(us) 00:24:31.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.972 =================================================================================================================== 00:24:31.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 940963 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 940788 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 940788 ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 940788 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 940788 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 940788' 00:24:31.972 killing process with pid 940788 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 940788 00:24:31.972 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 940788 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=942888 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 942888 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 942888 ']' 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.230 01:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.230 01:42:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.230 [2024-10-01 01:42:12.052553] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:32.230 [2024-10-01 01:42:12.052637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.489 [2024-10-01 01:42:12.135119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.489 [2024-10-01 01:42:12.225462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.489 [2024-10-01 01:42:12.225531] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.489 [2024-10-01 01:42:12.225548] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.489 [2024-10-01 01:42:12.225562] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.489 [2024-10-01 01:42:12.225573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.489 [2024-10-01 01:42:12.225604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.F4dj8o2U6Z 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F4dj8o2U6Z 00:24:32.748 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.007 [2024-10-01 01:42:12.626753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.007 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:33.265 01:42:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:33.524 [2024-10-01 01:42:13.224408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:24:33.524 [2024-10-01 01:42:13.224671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.524 01:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:33.783 malloc0 00:24:33.783 01:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:34.042 01:42:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:34.301 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.559 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=943214 00:24:34.559 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:34.559 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:34.559 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 943214 /var/tmp/bdevperf.sock 00:24:34.559 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 943214 ']' 00:24:34.560 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.560 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.560 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.560 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.560 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.560 [2024-10-01 01:42:14.379206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:34.560 [2024-10-01 01:42:14.379304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943214 ] 00:24:34.818 [2024-10-01 01:42:14.444024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.818 [2024-10-01 01:42:14.535249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.818 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.818 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.818 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:35.076 01:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:35.334 [2024-10-01 01:42:15.148349] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.592 nvme0n1 00:24:35.592 01:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.592 Running I/O for 1 seconds... 00:24:36.783 3296.00 IOPS, 12.88 MiB/s 00:24:36.783 Latency(us) 00:24:36.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.783 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:36.783 Verification LBA range: start 0x0 length 0x2000 00:24:36.783 nvme0n1 : 1.02 3343.11 13.06 0.00 0.00 37862.61 8786.68 49321.91 00:24:36.783 =================================================================================================================== 00:24:36.783 Total : 3343.11 13.06 0.00 0.00 37862.61 8786.68 49321.91 00:24:36.783 { 00:24:36.783 "results": [ 00:24:36.783 { 00:24:36.783 "job": "nvme0n1", 00:24:36.783 "core_mask": "0x2", 00:24:36.783 "workload": "verify", 00:24:36.783 "status": "finished", 00:24:36.783 "verify_range": { 00:24:36.783 "start": 0, 00:24:36.783 "length": 8192 00:24:36.783 }, 00:24:36.783 "queue_depth": 128, 00:24:36.783 "io_size": 4096, 00:24:36.783 "runtime": 1.024195, 00:24:36.783 "iops": 3343.1133719652994, 00:24:36.783 "mibps": 13.05903660923945, 00:24:36.783 "io_failed": 0, 00:24:36.783 "io_timeout": 0, 00:24:36.783 "avg_latency_us": 37862.60735202493, 00:24:36.783 "min_latency_us": 8786.678518518518, 00:24:36.783 "max_latency_us": 49321.90814814815 00:24:36.783 } 00:24:36.783 ], 00:24:36.783 "core_count": 1 00:24:36.783 } 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 943214 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 943214 ']' 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 943214 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
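Unlike the previous run, this bdevperf instance (pid 943214) is wired up over its RPC socket after startup: the PSK is registered and the controller attached explicitly, and the resulting bdev nvme0n1 is what perform_tests exercises in the 1-second verify run above. The calls, condensed from the trace:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests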
00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 943214 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 943214' 00:24:36.783 killing process with pid 943214 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 943214 00:24:36.783 Received shutdown signal, test time was about 1.000000 seconds 00:24:36.783 00:24:36.783 Latency(us) 00:24:36.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.783 =================================================================================================================== 00:24:36.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.783 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 943214 00:24:37.041 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 942888 00:24:37.041 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 942888 ']' 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 942888 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 942888 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 942888' 00:24:37.042 killing process with pid 942888 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 942888 00:24:37.042 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 942888 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=943573 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 943573 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 943573 ']' 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.300 01:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.300 01:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.300 [2024-10-01 01:42:16.973212] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:37.300 [2024-10-01 01:42:16.973319] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.300 [2024-10-01 01:42:17.041222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.300 [2024-10-01 01:42:17.129517] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.300 [2024-10-01 01:42:17.129591] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.300 [2024-10-01 01:42:17.129605] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.301 [2024-10-01 01:42:17.129616] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.301 [2024-10-01 01:42:17.129625] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.301 [2024-10-01 01:42:17.129658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 [2024-10-01 01:42:17.277721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.559 malloc0 00:24:37.559 [2024-10-01 01:42:17.321190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.559 [2024-10-01 01:42:17.321517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=943601 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 943601 /var/tmp/bdevperf.sock 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 943601 ']' 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.559 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 [2024-10-01 01:42:17.401475] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:37.559 [2024-10-01 01:42:17.401555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943601 ] 00:24:37.817 [2024-10-01 01:42:17.469129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.817 [2024-10-01 01:42:17.560476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.075 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.075 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:38.075 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F4dj8o2U6Z 00:24:38.332 01:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:38.590 [2024-10-01 01:42:18.237197] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.590 nvme0n1 00:24:38.590 01:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.590 Running I/O for 1 seconds... 
00:24:39.965 3146.00 IOPS, 12.29 MiB/s 00:24:39.965 Latency(us) 00:24:39.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.965 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:39.965 Verification LBA range: start 0x0 length 0x2000 00:24:39.965 nvme0n1 : 1.02 3196.17 12.49 0.00 0.00 39599.94 9126.49 54758.97 00:24:39.965 =================================================================================================================== 00:24:39.965 Total : 3196.17 12.49 0.00 0.00 39599.94 9126.49 54758.97 00:24:39.965 { 00:24:39.965 "results": [ 00:24:39.965 { 00:24:39.965 "job": "nvme0n1", 00:24:39.965 "core_mask": "0x2", 00:24:39.965 "workload": "verify", 00:24:39.965 "status": "finished", 00:24:39.965 "verify_range": { 00:24:39.965 "start": 0, 00:24:39.965 "length": 8192 00:24:39.965 }, 00:24:39.965 "queue_depth": 128, 00:24:39.965 "io_size": 4096, 00:24:39.965 "runtime": 1.024665, 00:24:39.965 "iops": 3196.1665519950425, 00:24:39.965 "mibps": 12.485025593730635, 00:24:39.965 "io_failed": 0, 00:24:39.965 "io_timeout": 0, 00:24:39.965 "avg_latency_us": 39599.93808085949, 00:24:39.965 "min_latency_us": 9126.494814814814, 00:24:39.965 "max_latency_us": 54758.96888888889 00:24:39.965 } 00:24:39.965 ], 00:24:39.965 "core_count": 1 00:24:39.965 } 00:24:39.965 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:39.965 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.965 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.965 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.965 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:39.965 "subsystems": [ 00:24:39.965 { 00:24:39.965 "subsystem": "keyring", 00:24:39.965 "config": [ 00:24:39.965 { 00:24:39.965 "method": "keyring_file_add_key", 00:24:39.965 "params": { 00:24:39.965 "name": "key0", 00:24:39.965 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:39.965 } 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "subsystem": "iobuf", 00:24:39.965 "config": [ 00:24:39.965 { 00:24:39.965 "method": "iobuf_set_options", 00:24:39.965 "params": { 00:24:39.965 "small_pool_count": 8192, 00:24:39.965 "large_pool_count": 1024, 00:24:39.965 "small_bufsize": 8192, 00:24:39.965 "large_bufsize": 135168 00:24:39.965 } 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "subsystem": "sock", 00:24:39.965 "config": [ 00:24:39.965 { 00:24:39.965 "method": "sock_set_default_impl", 00:24:39.965 "params": { 00:24:39.965 "impl_name": "posix" 00:24:39.965 } 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "method": "sock_impl_set_options", 00:24:39.965 "params": { 00:24:39.965 "impl_name": "ssl", 00:24:39.965 "recv_buf_size": 4096, 00:24:39.965 "send_buf_size": 4096, 00:24:39.965 "enable_recv_pipe": true, 00:24:39.965 "enable_quickack": false, 00:24:39.965 "enable_placement_id": 0, 00:24:39.965 "enable_zerocopy_send_server": true, 00:24:39.965 "enable_zerocopy_send_client": false, 00:24:39.965 "zerocopy_threshold": 0, 00:24:39.965 "tls_version": 0, 00:24:39.965 "enable_ktls": false 00:24:39.965 } 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "method": "sock_impl_set_options", 00:24:39.965 "params": { 00:24:39.965 "impl_name": "posix", 00:24:39.965 "recv_buf_size": 2097152, 00:24:39.965 "send_buf_size": 2097152, 00:24:39.965 "enable_recv_pipe": 
true, 00:24:39.965 "enable_quickack": false, 00:24:39.965 "enable_placement_id": 0, 00:24:39.965 "enable_zerocopy_send_server": true, 00:24:39.965 "enable_zerocopy_send_client": false, 00:24:39.965 "zerocopy_threshold": 0, 00:24:39.965 "tls_version": 0, 00:24:39.965 "enable_ktls": false 00:24:39.965 } 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "subsystem": "vmd", 00:24:39.965 "config": [] 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "subsystem": "accel", 00:24:39.965 "config": [ 00:24:39.965 { 00:24:39.965 "method": "accel_set_options", 00:24:39.965 "params": { 00:24:39.965 "small_cache_size": 128, 00:24:39.965 "large_cache_size": 16, 00:24:39.965 "task_count": 2048, 00:24:39.965 "sequence_count": 2048, 00:24:39.965 "buf_count": 2048 00:24:39.965 } 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "subsystem": "bdev", 00:24:39.965 "config": [ 00:24:39.965 { 00:24:39.965 "method": "bdev_set_options", 00:24:39.965 "params": { 00:24:39.965 "bdev_io_pool_size": 65535, 00:24:39.965 "bdev_io_cache_size": 256, 00:24:39.965 "bdev_auto_examine": true, 00:24:39.966 "iobuf_small_cache_size": 128, 00:24:39.966 "iobuf_large_cache_size": 16 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_raid_set_options", 00:24:39.966 "params": { 00:24:39.966 "process_window_size_kb": 1024, 00:24:39.966 "process_max_bandwidth_mb_sec": 0 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_iscsi_set_options", 00:24:39.966 "params": { 00:24:39.966 "timeout_sec": 30 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_nvme_set_options", 00:24:39.966 "params": { 00:24:39.966 "action_on_timeout": "none", 00:24:39.966 "timeout_us": 0, 00:24:39.966 "timeout_admin_us": 0, 00:24:39.966 "keep_alive_timeout_ms": 10000, 00:24:39.966 "arbitration_burst": 0, 00:24:39.966 "low_priority_weight": 0, 00:24:39.966 "medium_priority_weight": 0, 00:24:39.966 "high_priority_weight": 0, 00:24:39.966 "nvme_adminq_poll_period_us": 10000, 00:24:39.966 "nvme_ioq_poll_period_us": 0, 00:24:39.966 "io_queue_requests": 0, 00:24:39.966 "delay_cmd_submit": true, 00:24:39.966 "transport_retry_count": 4, 00:24:39.966 "bdev_retry_count": 3, 00:24:39.966 "transport_ack_timeout": 0, 00:24:39.966 "ctrlr_loss_timeout_sec": 0, 00:24:39.966 "reconnect_delay_sec": 0, 00:24:39.966 "fast_io_fail_timeout_sec": 0, 00:24:39.966 "disable_auto_failback": false, 00:24:39.966 "generate_uuids": false, 00:24:39.966 "transport_tos": 0, 00:24:39.966 "nvme_error_stat": false, 00:24:39.966 "rdma_srq_size": 0, 00:24:39.966 "io_path_stat": false, 00:24:39.966 "allow_accel_sequence": false, 00:24:39.966 "rdma_max_cq_size": 0, 00:24:39.966 "rdma_cm_event_timeout_ms": 0, 00:24:39.966 "dhchap_digests": [ 00:24:39.966 "sha256", 00:24:39.966 "sha384", 00:24:39.966 "sha512" 00:24:39.966 ], 00:24:39.966 "dhchap_dhgroups": [ 00:24:39.966 "null", 00:24:39.966 "ffdhe2048", 00:24:39.966 "ffdhe3072", 00:24:39.966 "ffdhe4096", 00:24:39.966 "ffdhe6144", 00:24:39.966 "ffdhe8192" 00:24:39.966 ] 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_nvme_set_hotplug", 00:24:39.966 "params": { 00:24:39.966 "period_us": 100000, 00:24:39.966 "enable": false 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_malloc_create", 00:24:39.966 "params": { 00:24:39.966 "name": "malloc0", 00:24:39.966 "num_blocks": 8192, 00:24:39.966 "block_size": 4096, 00:24:39.966 "physical_block_size": 4096, 00:24:39.966 "uuid": 
"20c9594e-5a2b-475e-b8e0-d820fe4d5deb", 00:24:39.966 "optimal_io_boundary": 0, 00:24:39.966 "md_size": 0, 00:24:39.966 "dif_type": 0, 00:24:39.966 "dif_is_head_of_md": false, 00:24:39.966 "dif_pi_format": 0 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "bdev_wait_for_examine" 00:24:39.966 } 00:24:39.966 ] 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "subsystem": "nbd", 00:24:39.966 "config": [] 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "subsystem": "scheduler", 00:24:39.966 "config": [ 00:24:39.966 { 00:24:39.966 "method": "framework_set_scheduler", 00:24:39.966 "params": { 00:24:39.966 "name": "static" 00:24:39.966 } 00:24:39.966 } 00:24:39.966 ] 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "subsystem": "nvmf", 00:24:39.966 "config": [ 00:24:39.966 { 00:24:39.966 "method": "nvmf_set_config", 00:24:39.966 "params": { 00:24:39.966 "discovery_filter": "match_any", 00:24:39.966 "admin_cmd_passthru": { 00:24:39.966 "identify_ctrlr": false 00:24:39.966 }, 00:24:39.966 "dhchap_digests": [ 00:24:39.966 "sha256", 00:24:39.966 "sha384", 00:24:39.966 "sha512" 00:24:39.966 ], 00:24:39.966 "dhchap_dhgroups": [ 00:24:39.966 "null", 00:24:39.966 "ffdhe2048", 00:24:39.966 "ffdhe3072", 00:24:39.966 "ffdhe4096", 00:24:39.966 "ffdhe6144", 00:24:39.966 "ffdhe8192" 00:24:39.966 ] 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_set_max_subsystems", 00:24:39.966 "params": { 00:24:39.966 "max_subsystems": 1024 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_set_crdt", 00:24:39.966 "params": { 00:24:39.966 "crdt1": 0, 00:24:39.966 "crdt2": 0, 00:24:39.966 "crdt3": 0 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_create_transport", 00:24:39.966 "params": { 00:24:39.966 "trtype": "TCP", 00:24:39.966 "max_queue_depth": 128, 00:24:39.966 "max_io_qpairs_per_ctrlr": 127, 00:24:39.966 "in_capsule_data_size": 4096, 00:24:39.966 "max_io_size": 131072, 00:24:39.966 "io_unit_size": 131072, 00:24:39.966 "max_aq_depth": 128, 00:24:39.966 "num_shared_buffers": 511, 00:24:39.966 "buf_cache_size": 4294967295, 00:24:39.966 "dif_insert_or_strip": false, 00:24:39.966 "zcopy": false, 00:24:39.966 "c2h_success": false, 00:24:39.966 "sock_priority": 0, 00:24:39.966 "abort_timeout_sec": 1, 00:24:39.966 "ack_timeout": 0, 00:24:39.966 "data_wr_pool_size": 0 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_create_subsystem", 00:24:39.966 "params": { 00:24:39.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.966 "allow_any_host": false, 00:24:39.966 "serial_number": "00000000000000000000", 00:24:39.966 "model_number": "SPDK bdev Controller", 00:24:39.966 "max_namespaces": 32, 00:24:39.966 "min_cntlid": 1, 00:24:39.966 "max_cntlid": 65519, 00:24:39.966 "ana_reporting": false 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_subsystem_add_host", 00:24:39.966 "params": { 00:24:39.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.966 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.966 "psk": "key0" 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": "nvmf_subsystem_add_ns", 00:24:39.966 "params": { 00:24:39.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.966 "namespace": { 00:24:39.966 "nsid": 1, 00:24:39.966 "bdev_name": "malloc0", 00:24:39.966 "nguid": "20C9594E5A2B475EB8E0D820FE4D5DEB", 00:24:39.966 "uuid": "20c9594e-5a2b-475e-b8e0-d820fe4d5deb", 00:24:39.966 "no_auto_visible": false 00:24:39.966 } 00:24:39.966 } 00:24:39.966 }, 00:24:39.966 { 00:24:39.966 "method": 
"nvmf_subsystem_add_listener", 00:24:39.966 "params": { 00:24:39.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.966 "listen_address": { 00:24:39.966 "trtype": "TCP", 00:24:39.966 "adrfam": "IPv4", 00:24:39.966 "traddr": "10.0.0.2", 00:24:39.966 "trsvcid": "4420" 00:24:39.966 }, 00:24:39.966 "secure_channel": false, 00:24:39.966 "sock_impl": "ssl" 00:24:39.966 } 00:24:39.966 } 00:24:39.966 ] 00:24:39.966 } 00:24:39.966 ] 00:24:39.966 }' 00:24:39.966 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:40.224 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:40.224 "subsystems": [ 00:24:40.224 { 00:24:40.224 "subsystem": "keyring", 00:24:40.224 "config": [ 00:24:40.224 { 00:24:40.224 "method": "keyring_file_add_key", 00:24:40.224 "params": { 00:24:40.224 "name": "key0", 00:24:40.224 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:40.224 } 00:24:40.224 } 00:24:40.224 ] 00:24:40.224 }, 00:24:40.224 { 00:24:40.224 "subsystem": "iobuf", 00:24:40.224 "config": [ 00:24:40.224 { 00:24:40.224 "method": "iobuf_set_options", 00:24:40.224 "params": { 00:24:40.224 "small_pool_count": 8192, 00:24:40.224 "large_pool_count": 1024, 00:24:40.224 "small_bufsize": 8192, 00:24:40.224 "large_bufsize": 135168 00:24:40.224 } 00:24:40.224 } 00:24:40.224 ] 00:24:40.224 }, 00:24:40.224 { 00:24:40.224 "subsystem": "sock", 00:24:40.224 "config": [ 00:24:40.224 { 00:24:40.224 "method": "sock_set_default_impl", 00:24:40.224 "params": { 00:24:40.224 "impl_name": "posix" 00:24:40.224 } 00:24:40.224 }, 00:24:40.224 { 00:24:40.224 "method": "sock_impl_set_options", 00:24:40.224 "params": { 00:24:40.224 "impl_name": "ssl", 00:24:40.224 "recv_buf_size": 4096, 00:24:40.224 "send_buf_size": 4096, 00:24:40.224 "enable_recv_pipe": true, 00:24:40.224 "enable_quickack": false, 00:24:40.224 "enable_placement_id": 0, 00:24:40.224 "enable_zerocopy_send_server": true, 00:24:40.224 "enable_zerocopy_send_client": false, 00:24:40.224 "zerocopy_threshold": 0, 00:24:40.224 "tls_version": 0, 00:24:40.224 "enable_ktls": false 00:24:40.224 } 00:24:40.224 }, 00:24:40.224 { 00:24:40.224 "method": "sock_impl_set_options", 00:24:40.224 "params": { 00:24:40.224 "impl_name": "posix", 00:24:40.224 "recv_buf_size": 2097152, 00:24:40.224 "send_buf_size": 2097152, 00:24:40.224 "enable_recv_pipe": true, 00:24:40.224 "enable_quickack": false, 00:24:40.224 "enable_placement_id": 0, 00:24:40.224 "enable_zerocopy_send_server": true, 00:24:40.224 "enable_zerocopy_send_client": false, 00:24:40.224 "zerocopy_threshold": 0, 00:24:40.224 "tls_version": 0, 00:24:40.224 "enable_ktls": false 00:24:40.224 } 00:24:40.224 } 00:24:40.224 ] 00:24:40.224 }, 00:24:40.224 { 00:24:40.224 "subsystem": "vmd", 00:24:40.224 "config": [] 00:24:40.224 }, 00:24:40.225 { 00:24:40.225 "subsystem": "accel", 00:24:40.225 "config": [ 00:24:40.225 { 00:24:40.225 "method": "accel_set_options", 00:24:40.225 "params": { 00:24:40.225 "small_cache_size": 128, 00:24:40.225 "large_cache_size": 16, 00:24:40.225 "task_count": 2048, 00:24:40.225 "sequence_count": 2048, 00:24:40.225 "buf_count": 2048 00:24:40.225 } 00:24:40.225 } 00:24:40.225 ] 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "subsystem": "bdev", 00:24:40.225 "config": [ 00:24:40.225 { 00:24:40.225 "method": "bdev_set_options", 00:24:40.225 "params": { 00:24:40.225 "bdev_io_pool_size": 65535, 00:24:40.225 "bdev_io_cache_size": 256, 00:24:40.225 "bdev_auto_examine": true, 00:24:40.225 
"iobuf_small_cache_size": 128, 00:24:40.225 "iobuf_large_cache_size": 16 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_raid_set_options", 00:24:40.225 "params": { 00:24:40.225 "process_window_size_kb": 1024, 00:24:40.225 "process_max_bandwidth_mb_sec": 0 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_iscsi_set_options", 00:24:40.225 "params": { 00:24:40.225 "timeout_sec": 30 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_nvme_set_options", 00:24:40.225 "params": { 00:24:40.225 "action_on_timeout": "none", 00:24:40.225 "timeout_us": 0, 00:24:40.225 "timeout_admin_us": 0, 00:24:40.225 "keep_alive_timeout_ms": 10000, 00:24:40.225 "arbitration_burst": 0, 00:24:40.225 "low_priority_weight": 0, 00:24:40.225 "medium_priority_weight": 0, 00:24:40.225 "high_priority_weight": 0, 00:24:40.225 "nvme_adminq_poll_period_us": 10000, 00:24:40.225 "nvme_ioq_poll_period_us": 0, 00:24:40.225 "io_queue_requests": 512, 00:24:40.225 "delay_cmd_submit": true, 00:24:40.225 "transport_retry_count": 4, 00:24:40.225 "bdev_retry_count": 3, 00:24:40.225 "transport_ack_timeout": 0, 00:24:40.225 "ctrlr_loss_timeout_sec": 0, 00:24:40.225 "reconnect_delay_sec": 0, 00:24:40.225 "fast_io_fail_timeout_sec": 0, 00:24:40.225 "disable_auto_failback": false, 00:24:40.225 "generate_uuids": false, 00:24:40.225 "transport_tos": 0, 00:24:40.225 "nvme_error_stat": false, 00:24:40.225 "rdma_srq_size": 0, 00:24:40.225 "io_path_stat": false, 00:24:40.225 "allow_accel_sequence": false, 00:24:40.225 "rdma_max_cq_size": 0, 00:24:40.225 "rdma_cm_event_timeout_ms": 0, 00:24:40.225 "dhchap_digests": [ 00:24:40.225 "sha256", 00:24:40.225 "sha384", 00:24:40.225 "sha512" 00:24:40.225 ], 00:24:40.225 "dhchap_dhgroups": [ 00:24:40.225 "null", 00:24:40.225 "ffdhe2048", 00:24:40.225 "ffdhe3072", 00:24:40.225 "ffdhe4096", 00:24:40.225 "ffdhe6144", 00:24:40.225 "ffdhe8192" 00:24:40.225 ] 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_nvme_attach_controller", 00:24:40.225 "params": { 00:24:40.225 "name": "nvme0", 00:24:40.225 "trtype": "TCP", 00:24:40.225 "adrfam": "IPv4", 00:24:40.225 "traddr": "10.0.0.2", 00:24:40.225 "trsvcid": "4420", 00:24:40.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.225 "prchk_reftag": false, 00:24:40.225 "prchk_guard": false, 00:24:40.225 "ctrlr_loss_timeout_sec": 0, 00:24:40.225 "reconnect_delay_sec": 0, 00:24:40.225 "fast_io_fail_timeout_sec": 0, 00:24:40.225 "psk": "key0", 00:24:40.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.225 "hdgst": false, 00:24:40.225 "ddgst": false 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_nvme_set_hotplug", 00:24:40.225 "params": { 00:24:40.225 "period_us": 100000, 00:24:40.225 "enable": false 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_enable_histogram", 00:24:40.225 "params": { 00:24:40.225 "name": "nvme0n1", 00:24:40.225 "enable": true 00:24:40.225 } 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "method": "bdev_wait_for_examine" 00:24:40.225 } 00:24:40.225 ] 00:24:40.225 }, 00:24:40.225 { 00:24:40.225 "subsystem": "nbd", 00:24:40.225 "config": [] 00:24:40.225 } 00:24:40.225 ] 00:24:40.225 }' 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 943601 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 943601 ']' 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 
943601 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 943601 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 943601' 00:24:40.225 killing process with pid 943601 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 943601 00:24:40.225 Received shutdown signal, test time was about 1.000000 seconds 00:24:40.225 00:24:40.225 Latency(us) 00:24:40.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.225 =================================================================================================================== 00:24:40.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.225 01:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 943601 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 943573 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 943573 ']' 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 943573 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 943573 00:24:40.483 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.484 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.484 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 943573' 00:24:40.484 killing process with pid 943573 00:24:40.484 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 943573 00:24:40.484 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 943573 00:24:40.742 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:40.742 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:40.742 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:40.742 "subsystems": [ 00:24:40.742 { 00:24:40.742 "subsystem": "keyring", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "keyring_file_add_key", 00:24:40.742 "params": { 00:24:40.742 "name": "key0", 00:24:40.742 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:40.742 } 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "iobuf", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "iobuf_set_options", 00:24:40.742 "params": { 00:24:40.742 "small_pool_count": 8192, 00:24:40.742 
"large_pool_count": 1024, 00:24:40.742 "small_bufsize": 8192, 00:24:40.742 "large_bufsize": 135168 00:24:40.742 } 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "sock", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "sock_set_default_impl", 00:24:40.742 "params": { 00:24:40.742 "impl_name": "posix" 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "sock_impl_set_options", 00:24:40.742 "params": { 00:24:40.742 "impl_name": "ssl", 00:24:40.742 "recv_buf_size": 4096, 00:24:40.742 "send_buf_size": 4096, 00:24:40.742 "enable_recv_pipe": true, 00:24:40.742 "enable_quickack": false, 00:24:40.742 "enable_placement_id": 0, 00:24:40.742 "enable_zerocopy_send_server": true, 00:24:40.742 "enable_zerocopy_send_client": false, 00:24:40.742 "zerocopy_threshold": 0, 00:24:40.742 "tls_version": 0, 00:24:40.742 "enable_ktls": false 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "sock_impl_set_options", 00:24:40.742 "params": { 00:24:40.742 "impl_name": "posix", 00:24:40.742 "recv_buf_size": 2097152, 00:24:40.742 "send_buf_size": 2097152, 00:24:40.742 "enable_recv_pipe": true, 00:24:40.742 "enable_quickack": false, 00:24:40.742 "enable_placement_id": 0, 00:24:40.742 "enable_zerocopy_send_server": true, 00:24:40.742 "enable_zerocopy_send_client": false, 00:24:40.742 "zerocopy_threshold": 0, 00:24:40.742 "tls_version": 0, 00:24:40.742 "enable_ktls": false 00:24:40.742 } 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "vmd", 00:24:40.742 "config": [] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "accel", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "accel_set_options", 00:24:40.742 "params": { 00:24:40.742 "small_cache_size": 128, 00:24:40.742 "large_cache_size": 16, 00:24:40.742 "task_count": 2048, 00:24:40.742 "sequence_count": 2048, 00:24:40.742 "buf_count": 2048 00:24:40.742 } 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "bdev", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "bdev_set_options", 00:24:40.742 "params": { 00:24:40.742 "bdev_io_pool_size": 65535, 00:24:40.742 "bdev_io_cache_size": 256, 00:24:40.742 "bdev_auto_examine": true, 00:24:40.742 "iobuf_small_cache_size": 128, 00:24:40.742 "iobuf_large_cache_size": 16 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_raid_set_options", 00:24:40.742 "params": { 00:24:40.742 "process_window_size_kb": 1024, 00:24:40.742 "process_max_bandwidth_mb_sec": 0 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_iscsi_set_options", 00:24:40.742 "params": { 00:24:40.742 "timeout_sec": 30 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_nvme_set_options", 00:24:40.742 "params": { 00:24:40.742 "action_on_timeout": "none", 00:24:40.742 "timeout_us": 0, 00:24:40.742 "timeout_admin_us": 0, 00:24:40.742 "keep_alive_timeout_ms": 10000, 00:24:40.742 "arbitration_burst": 0, 00:24:40.742 "low_priority_weight": 0, 00:24:40.742 "medium_priority_weight": 0, 00:24:40.742 "high_priority_weight": 0, 00:24:40.742 "nvme_adminq_poll_period_us": 10000, 00:24:40.742 "nvme_ioq_poll_period_us": 0, 00:24:40.742 "io_queue_requests": 0, 00:24:40.742 "delay_cmd_submit": true, 00:24:40.742 "transport_retry_count": 4, 00:24:40.742 "bdev_retry_count": 3, 00:24:40.742 "transport_ack_timeout": 0, 00:24:40.742 "ctrlr_loss_timeout_sec": 0, 00:24:40.742 "reconnect_delay_sec": 0, 00:24:40.742 
"fast_io_fail_timeout_sec": 0, 00:24:40.742 "disable_auto_failback": false, 00:24:40.742 "generate_uuids": false, 00:24:40.742 "transport_tos": 0, 00:24:40.742 "nvme_error_stat": false, 00:24:40.742 "rdma_srq_size": 0, 00:24:40.742 "io_path_stat": false, 00:24:40.742 "allow_accel_sequence": false, 00:24:40.742 "rdma_max_cq_size": 0, 00:24:40.742 "rdma_cm_event_timeout_ms": 0, 00:24:40.742 "dhchap_digests": [ 00:24:40.742 "sha256", 00:24:40.742 "sha384", 00:24:40.742 "sha512" 00:24:40.742 ], 00:24:40.742 "dhchap_dhgroups": [ 00:24:40.742 "null", 00:24:40.742 "ffdhe2048", 00:24:40.742 "ffdhe3072", 00:24:40.742 "ffdhe4096", 00:24:40.742 "ffdhe6144", 00:24:40.742 "ffdhe8192" 00:24:40.742 ] 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_nvme_set_hotplug", 00:24:40.742 "params": { 00:24:40.742 "period_us": 100000, 00:24:40.742 "enable": false 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_malloc_create", 00:24:40.742 "params": { 00:24:40.742 "name": "malloc0", 00:24:40.742 "num_blocks": 8192, 00:24:40.742 "block_size": 4096, 00:24:40.742 "physical_block_size": 4096, 00:24:40.742 "uuid": "20c9594e-5a2b-475e-b8e0-d820fe4d5deb", 00:24:40.742 "optimal_io_boundary": 0, 00:24:40.742 "md_size": 0, 00:24:40.742 "dif_type": 0, 00:24:40.742 "dif_is_head_of_md": false, 00:24:40.742 "dif_pi_format": 0 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "bdev_wait_for_examine" 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "nbd", 00:24:40.742 "config": [] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "scheduler", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "framework_set_scheduler", 00:24:40.742 "params": { 00:24:40.742 "name": "static" 00:24:40.742 } 00:24:40.742 } 00:24:40.742 ] 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "subsystem": "nvmf", 00:24:40.742 "config": [ 00:24:40.742 { 00:24:40.742 "method": "nvmf_set_config", 00:24:40.742 "params": { 00:24:40.742 "discovery_filter": "match_any", 00:24:40.742 "admin_cmd_passthru": { 00:24:40.742 "identify_ctrlr": false 00:24:40.742 }, 00:24:40.742 "dhchap_digests": [ 00:24:40.742 "sha256", 00:24:40.742 "sha384", 00:24:40.742 "sha512" 00:24:40.742 ], 00:24:40.742 "dhchap_dhgroups": [ 00:24:40.742 "null", 00:24:40.742 "ffdhe2048", 00:24:40.742 "ffdhe3072", 00:24:40.742 "ffdhe4096", 00:24:40.742 "ffdhe6144", 00:24:40.742 "ffdhe8192" 00:24:40.742 ] 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "nvmf_set_max_subsystems", 00:24:40.742 "params": { 00:24:40.742 "max_subsystems": 1024 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "nvmf_set_crdt", 00:24:40.742 "params": { 00:24:40.742 "crdt1": 0, 00:24:40.742 "crdt2": 0, 00:24:40.742 "crdt3": 0 00:24:40.742 } 00:24:40.742 }, 00:24:40.742 { 00:24:40.742 "method": "nvmf_create_transport", 00:24:40.742 "params": { 00:24:40.742 "trtype": "TCP", 00:24:40.742 "max_queue_depth": 128, 00:24:40.742 "max_io_qpairs_per_ctrlr": 127, 00:24:40.742 "in_capsule_data_size": 4096, 00:24:40.742 "max_io_size": 131072, 00:24:40.742 "io_unit_size": 131072, 00:24:40.742 "max_aq_depth": 128, 00:24:40.742 "num_shared_buffers": 511, 00:24:40.743 "buf_cache_size": 4294967295, 00:24:40.743 "dif_insert_or_strip": false, 00:24:40.743 "zcopy": false, 00:24:40.743 "c2h_success": false, 00:24:40.743 "sock_priority": 0, 00:24:40.743 "abort_timeout_sec": 1, 00:24:40.743 "ack_timeout": 0, 00:24:40.743 "data_wr_pool_size": 0 00:24:40.743 } 00:24:40.743 }, 00:24:40.743 { 
00:24:40.743 "method": "nvmf_create_subsystem", 00:24:40.743 "params": { 00:24:40.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.743 "allow_any_host": false, 00:24:40.743 "serial_number": "00000000000000000000", 00:24:40.743 "model_number": "SPDK bdev Controller", 00:24:40.743 "max_namespaces": 32, 00:24:40.743 "min_cntlid": 1, 00:24:40.743 "max_cntlid": 65519, 00:24:40.743 "ana_reporting": false 00:24:40.743 } 00:24:40.743 }, 00:24:40.743 { 00:24:40.743 "method": "nvmf_subsystem_add_host", 00:24:40.743 "params": { 00:24:40.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.743 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.743 "psk": "key0" 00:24:40.743 } 00:24:40.743 }, 00:24:40.743 { 00:24:40.743 "method": "nvmf_subsystem_add_ns", 00:24:40.743 "params": { 00:24:40.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.743 "namespace": { 00:24:40.743 "nsid": 1, 00:24:40.743 "bdev_name": "malloc0", 00:24:40.743 "nguid": "20C9594E5A2B475EB8E0D820FE4D5DEB", 00:24:40.743 "uuid": "20c9594e-5a2b-475e-b8e0-d820fe4d5deb", 00:24:40.743 "no_auto_visible": false 00:24:40.743 } 00:24:40.743 } 00:24:40.743 }, 00:24:40.743 { 00:24:40.743 "method": "nvmf_subsystem_add_listener", 00:24:40.743 "params": { 00:24:40.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.743 "listen_address": { 00:24:40.743 "trtype": "TCP", 00:24:40.743 "adrfam": "IPv4", 00:24:40.743 "traddr": "10.0.0.2", 00:24:40.743 "trsvcid": "4420" 00:24:40.743 }, 00:24:40.743 "secure_channel": false, 00:24:40.743 "sock_impl": "ssl" 00:24:40.743 } 00:24:40.743 } 00:24:40.743 ] 00:24:40.743 } 00:24:40.743 ] 00:24:40.743 }' 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=944010 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 944010 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 944010 ']' 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.743 01:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.743 [2024-10-01 01:42:20.541239] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
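The point of this phase is config round-tripping: the JSON emitted by save_config on the live target and on the live bdevperf is fed back unmodified into fresh processes via -c. A sketch of the pattern, with plain temp files standing in for the /dev/fd process substitutions used above and the ip netns wrapper from this job omitted:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# capture the running configurations as JSON
$SPDK/scripts/rpc.py save_config > /tmp/tgt_config.json
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bperf_config.json

# ... stop both processes, then restart each one directly from its saved JSON
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt_config.json
$SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json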
00:24:40.743 [2024-10-01 01:42:20.541342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.001 [2024-10-01 01:42:20.611653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.001 [2024-10-01 01:42:20.700275] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.001 [2024-10-01 01:42:20.700346] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.001 [2024-10-01 01:42:20.700372] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.001 [2024-10-01 01:42:20.700386] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.001 [2024-10-01 01:42:20.700398] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.001 [2024-10-01 01:42:20.700485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.259 [2024-10-01 01:42:20.958789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.259 [2024-10-01 01:42:20.990818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.259 [2024-10-01 01:42:20.991126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=944160 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 944160 /var/tmp/bdevperf.sock 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 944160 ']' 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
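The target for this phase is again started with -i 0 -e 0xFFFF, i.e. shared-memory instance 0 with every tracepoint group enabled; its startup NOTICEs above spell out the two ways to inspect that trace. A sketch of both, assuming the instance id and app name from this run:

# live snapshot of nvmf_tgt tracepoints while the target (shm instance 0) is running
spdk_trace -s nvmf -i 0

# or keep the raw shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0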
00:24:41.824 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:41.824 "subsystems": [ 00:24:41.824 { 00:24:41.824 "subsystem": "keyring", 00:24:41.824 "config": [ 00:24:41.824 { 00:24:41.824 "method": "keyring_file_add_key", 00:24:41.824 "params": { 00:24:41.824 "name": "key0", 00:24:41.824 "path": "/tmp/tmp.F4dj8o2U6Z" 00:24:41.824 } 00:24:41.824 } 00:24:41.824 ] 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "subsystem": "iobuf", 00:24:41.824 "config": [ 00:24:41.824 { 00:24:41.824 "method": "iobuf_set_options", 00:24:41.824 "params": { 00:24:41.824 "small_pool_count": 8192, 00:24:41.824 "large_pool_count": 1024, 00:24:41.824 "small_bufsize": 8192, 00:24:41.824 "large_bufsize": 135168 00:24:41.824 } 00:24:41.824 } 00:24:41.824 ] 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "subsystem": "sock", 00:24:41.824 "config": [ 00:24:41.824 { 00:24:41.824 "method": "sock_set_default_impl", 00:24:41.824 "params": { 00:24:41.824 "impl_name": "posix" 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "sock_impl_set_options", 00:24:41.824 "params": { 00:24:41.824 "impl_name": "ssl", 00:24:41.824 "recv_buf_size": 4096, 00:24:41.824 "send_buf_size": 4096, 00:24:41.824 "enable_recv_pipe": true, 00:24:41.824 "enable_quickack": false, 00:24:41.824 "enable_placement_id": 0, 00:24:41.824 "enable_zerocopy_send_server": true, 00:24:41.824 "enable_zerocopy_send_client": false, 00:24:41.824 "zerocopy_threshold": 0, 00:24:41.824 "tls_version": 0, 00:24:41.824 "enable_ktls": false 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "sock_impl_set_options", 00:24:41.824 "params": { 00:24:41.824 "impl_name": "posix", 00:24:41.824 "recv_buf_size": 2097152, 00:24:41.824 "send_buf_size": 2097152, 00:24:41.824 "enable_recv_pipe": true, 00:24:41.824 "enable_quickack": false, 00:24:41.824 "enable_placement_id": 0, 00:24:41.824 "enable_zerocopy_send_server": true, 00:24:41.824 "enable_zerocopy_send_client": false, 00:24:41.824 "zerocopy_threshold": 0, 00:24:41.824 "tls_version": 0, 00:24:41.824 "enable_ktls": false 00:24:41.824 } 00:24:41.824 } 00:24:41.824 ] 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "subsystem": "vmd", 00:24:41.824 "config": [] 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "subsystem": "accel", 00:24:41.824 "config": [ 00:24:41.824 { 00:24:41.824 "method": "accel_set_options", 00:24:41.824 "params": { 00:24:41.824 "small_cache_size": 128, 00:24:41.824 "large_cache_size": 16, 00:24:41.824 "task_count": 2048, 00:24:41.824 "sequence_count": 2048, 00:24:41.824 "buf_count": 2048 00:24:41.824 } 00:24:41.824 } 00:24:41.824 ] 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "subsystem": "bdev", 00:24:41.824 "config": [ 00:24:41.824 { 00:24:41.824 "method": "bdev_set_options", 00:24:41.824 "params": { 00:24:41.824 "bdev_io_pool_size": 65535, 00:24:41.824 "bdev_io_cache_size": 256, 00:24:41.824 "bdev_auto_examine": true, 00:24:41.824 "iobuf_small_cache_size": 128, 00:24:41.824 "iobuf_large_cache_size": 16 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "bdev_raid_set_options", 00:24:41.824 "params": { 00:24:41.824 "process_window_size_kb": 1024, 00:24:41.824 "process_max_bandwidth_mb_sec": 0 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "bdev_iscsi_set_options", 00:24:41.824 "params": { 00:24:41.824 "timeout_sec": 30 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "bdev_nvme_set_options", 00:24:41.824 "params": { 00:24:41.824 "action_on_timeout": "none", 00:24:41.824 "timeout_us": 0, 
00:24:41.824 "timeout_admin_us": 0, 00:24:41.824 "keep_alive_timeout_ms": 10000, 00:24:41.824 "arbitration_burst": 0, 00:24:41.824 "low_priority_weight": 0, 00:24:41.824 "medium_priority_weight": 0, 00:24:41.824 "high_priority_weight": 0, 00:24:41.824 "nvme_adminq_poll_period_us": 10000, 00:24:41.824 "nvme_ioq_poll_period_us": 0, 00:24:41.824 "io_queue_requests": 512, 00:24:41.824 "delay_cmd_submit": true, 00:24:41.824 "transport_retry_count": 4, 00:24:41.824 "bdev_retry_count": 3, 00:24:41.824 "transport_ack_timeout": 0, 00:24:41.824 "ctrlr_loss_timeout_sec": 0, 00:24:41.824 "reconnect_delay_sec": 0, 00:24:41.824 "fast_io_fail_timeout_sec": 0, 00:24:41.824 "disable_auto_failback": false, 00:24:41.824 "generate_uuids": false, 00:24:41.824 "transport_tos": 0, 00:24:41.824 "nvme_error_stat": false, 00:24:41.824 "rdma_srq_size": 0, 00:24:41.824 "io_path_stat": false, 00:24:41.824 "allow_accel_sequence": false, 00:24:41.824 "rdma_max_cq_size": 0, 00:24:41.824 "rdma_cm_event_timeout_ms": 0, 00:24:41.824 "dhchap_digests": [ 00:24:41.824 "sha256", 00:24:41.824 "sha384", 00:24:41.824 "sha512" 00:24:41.824 ], 00:24:41.824 "dhchap_dhgroups": [ 00:24:41.824 "null", 00:24:41.824 "ffdhe2048", 00:24:41.824 "ffdhe3072", 00:24:41.824 "ffdhe4096", 00:24:41.824 "ffdhe6144", 00:24:41.824 "ffdhe8192" 00:24:41.824 ] 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "bdev_nvme_attach_controller", 00:24:41.824 "params": { 00:24:41.824 "name": "nvme0", 00:24:41.824 "trtype": "TCP", 00:24:41.824 "adrfam": "IPv4", 00:24:41.824 "traddr": "10.0.0.2", 00:24:41.824 "trsvcid": "4420", 00:24:41.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.824 "prchk_reftag": false, 00:24:41.824 "prchk_guard": false, 00:24:41.824 "ctrlr_loss_timeout_sec": 0, 00:24:41.824 "reconnect_delay_sec": 0, 00:24:41.824 "fast_io_fail_timeout_sec": 0, 00:24:41.824 "psk": "key0", 00:24:41.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:41.824 "hdgst": false, 00:24:41.824 "ddgst": false 00:24:41.824 } 00:24:41.824 }, 00:24:41.824 { 00:24:41.824 "method": "bdev_nvme_set_hotplug", 00:24:41.824 "params": { 00:24:41.824 "period_us": 100000, 00:24:41.824 "enable": false 00:24:41.824 } 00:24:41.824 }, 00:24:41.825 { 00:24:41.825 "method": "bdev_enable_histogram", 00:24:41.825 "params": { 00:24:41.825 "name": "nvme0n1", 00:24:41.825 "enable": true 00:24:41.825 } 00:24:41.825 }, 00:24:41.825 { 00:24:41.825 "method": "bdev_wait_for_examine" 00:24:41.825 } 00:24:41.825 ] 00:24:41.825 }, 00:24:41.825 { 00:24:41.825 "subsystem": "nbd", 00:24:41.825 "config": [] 00:24:41.825 } 00:24:41.825 ] 00:24:41.825 }' 00:24:41.825 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.825 01:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.825 [2024-10-01 01:42:21.598910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
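Before re-running I/O, the check that follows confirms the config-restored bdevperf actually holds the expected controller; it boils down to comparing the name reported by bdev_nvme_get_controllers:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# list the NVMe controllers attached inside bdevperf and compare against the expected name
name=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] && echo "controller restored from saved config"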
00:24:41.825 [2024-10-01 01:42:21.599036] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944160 ] 00:24:41.825 [2024-10-01 01:42:21.659010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.082 [2024-10-01 01:42:21.749326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.082 [2024-10-01 01:42:21.931331] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.014 01:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.271 Running I/O for 1 seconds... 00:24:44.204 3215.00 IOPS, 12.56 MiB/s 00:24:44.204 Latency(us) 00:24:44.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.204 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.204 Verification LBA range: start 0x0 length 0x2000 00:24:44.204 nvme0n1 : 1.02 3264.30 12.75 0.00 0.00 38810.14 6747.78 45049.93 00:24:44.204 =================================================================================================================== 00:24:44.204 Total : 3264.30 12.75 0.00 0.00 38810.14 6747.78 45049.93 00:24:44.204 { 00:24:44.204 "results": [ 00:24:44.204 { 00:24:44.204 "job": "nvme0n1", 00:24:44.204 "core_mask": "0x2", 00:24:44.204 "workload": "verify", 00:24:44.204 "status": "finished", 00:24:44.204 "verify_range": { 00:24:44.204 "start": 0, 00:24:44.204 "length": 8192 00:24:44.204 }, 00:24:44.204 "queue_depth": 128, 00:24:44.204 "io_size": 4096, 00:24:44.204 "runtime": 1.02411, 00:24:44.204 "iops": 3264.297780511859, 00:24:44.204 "mibps": 12.751163205124449, 00:24:44.204 "io_failed": 0, 00:24:44.204 "io_timeout": 0, 00:24:44.204 "avg_latency_us": 38810.14024307287, 00:24:44.204 "min_latency_us": 6747.780740740741, 00:24:44.204 "max_latency_us": 45049.93185185185 00:24:44.204 } 00:24:44.204 ], 00:24:44.204 "core_count": 1 00:24:44.204 } 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:44.204 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:44.204 nvmf_trace.0 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 944160 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 944160 ']' 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 944160 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944160 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944160' 00:24:44.462 killing process with pid 944160 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 944160 00:24:44.462 Received shutdown signal, test time was about 1.000000 seconds 00:24:44.462 00:24:44.462 Latency(us) 00:24:44.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.462 =================================================================================================================== 00:24:44.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.462 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 944160 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.720 rmmod nvme_tcp 00:24:44.720 rmmod nvme_fabrics 00:24:44.720 rmmod nvme_keyring 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 
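Teardown archives the shared-memory trace before unloading the initiator modules; reduced to its essentials (the output directory is this job's, adjust as needed):

OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

# archive the nvmf trace captured in /dev/shm for post-mortem analysis
tar -C /dev/shm/ -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

# unload the NVMe-oF host modules now that the TLS tests on this transport are done
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics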
00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 944010 ']' 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 944010 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 944010 ']' 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 944010 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944010 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944010' 00:24:44.720 killing process with pid 944010 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 944010 00:24:44.720 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 944010 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.980 01:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7H3g3cf1k9 /tmp/tmp.HLfiPsVE2r /tmp/tmp.F4dj8o2U6Z 00:24:47.513 00:24:47.513 real 1m24.370s 00:24:47.513 user 2m20.344s 00:24:47.513 sys 0m26.501s 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.513 ************************************ 00:24:47.513 END TEST nvmf_tls 00:24:47.513 ************************************ 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:47.513 01:42:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:47.514 ************************************ 00:24:47.514 START TEST nvmf_fips 00:24:47.514 ************************************ 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:47.514 * Looking for test storage... 00:24:47.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.514 --rc genhtml_branch_coverage=1 00:24:47.514 --rc genhtml_function_coverage=1 00:24:47.514 --rc genhtml_legend=1 00:24:47.514 --rc geninfo_all_blocks=1 00:24:47.514 --rc geninfo_unexecuted_blocks=1 00:24:47.514 00:24:47.514 ' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.514 --rc genhtml_branch_coverage=1 00:24:47.514 --rc genhtml_function_coverage=1 00:24:47.514 --rc genhtml_legend=1 00:24:47.514 --rc geninfo_all_blocks=1 00:24:47.514 --rc geninfo_unexecuted_blocks=1 00:24:47.514 00:24:47.514 ' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.514 --rc genhtml_branch_coverage=1 00:24:47.514 --rc genhtml_function_coverage=1 00:24:47.514 --rc genhtml_legend=1 00:24:47.514 --rc geninfo_all_blocks=1 00:24:47.514 --rc geninfo_unexecuted_blocks=1 00:24:47.514 00:24:47.514 ' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.514 --rc genhtml_branch_coverage=1 00:24:47.514 --rc genhtml_function_coverage=1 00:24:47.514 --rc genhtml_legend=1 00:24:47.514 --rc geninfo_all_blocks=1 00:24:47.514 --rc geninfo_unexecuted_blocks=1 00:24:47.514 00:24:47.514 ' 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.514 01:42:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.514 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:47.515 01:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:47.515 Error setting digest 00:24:47.515 40D24A75F47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:47.515 40D24A75F47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:47.515 
01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.515 01:42:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.417 01:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:49.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:49.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.417 01:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:49.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:49.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.417 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.418 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:24:49.676 00:24:49.676 --- 10.0.0.2 ping statistics --- 00:24:49.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.676 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:49.676 00:24:49.676 --- 10.0.0.1 ping statistics --- 00:24:49.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.676 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=946525 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 946525 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 946525 ']' 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.676 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.676 [2024-10-01 01:42:29.479562] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:24:49.677 [2024-10-01 01:42:29.479639] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.936 [2024-10-01 01:42:29.545554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.936 [2024-10-01 01:42:29.629427] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.936 [2024-10-01 01:42:29.629485] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.936 [2024-10-01 01:42:29.629510] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.936 [2024-10-01 01:42:29.629521] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.936 [2024-10-01 01:42:29.629530] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.936 [2024-10-01 01:42:29.629557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N1G 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N1G 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N1G 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N1G 00:24:49.936 01:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.194 [2024-10-01 01:42:30.026271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.194 [2024-10-01 01:42:30.042278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.194 [2024-10-01 01:42:30.042562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.453 malloc0 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.453 01:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=946551 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 946551 /var/tmp/bdevperf.sock 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 946551 ']' 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.453 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:50.453 [2024-10-01 01:42:30.192972] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:24:50.453 [2024-10-01 01:42:30.193102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946551 ] 00:24:50.453 [2024-10-01 01:42:30.256863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.711 [2024-10-01 01:42:30.348076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.711 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.711 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:50.711 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N1G 00:24:50.969 01:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:51.227 [2024-10-01 01:42:30.983112] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.227 TLSTESTn1 00:24:51.227 01:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.486 Running I/O for 10 seconds... 
00:25:01.763 3478.00 IOPS, 13.59 MiB/s 3502.50 IOPS, 13.68 MiB/s 3539.00 IOPS, 13.82 MiB/s 3575.25 IOPS, 13.97 MiB/s 3581.40 IOPS, 13.99 MiB/s 3582.17 IOPS, 13.99 MiB/s 3593.29 IOPS, 14.04 MiB/s 3601.75 IOPS, 14.07 MiB/s 3607.56 IOPS, 14.09 MiB/s 3602.60 IOPS, 14.07 MiB/s 00:25:01.763 Latency(us) 00:25:01.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.763 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.763 Verification LBA range: start 0x0 length 0x2000 00:25:01.763 TLSTESTn1 : 10.03 3606.07 14.09 0.00 0.00 35430.82 9951.76 38253.61 00:25:01.763 =================================================================================================================== 00:25:01.763 Total : 3606.07 14.09 0.00 0.00 35430.82 9951.76 38253.61 00:25:01.763 { 00:25:01.763 "results": [ 00:25:01.763 { 00:25:01.763 "job": "TLSTESTn1", 00:25:01.763 "core_mask": "0x4", 00:25:01.763 "workload": "verify", 00:25:01.763 "status": "finished", 00:25:01.763 "verify_range": { 00:25:01.763 "start": 0, 00:25:01.763 "length": 8192 00:25:01.763 }, 00:25:01.763 "queue_depth": 128, 00:25:01.763 "io_size": 4096, 00:25:01.763 "runtime": 10.025603, 00:25:01.763 "iops": 3606.067385672463, 00:25:01.763 "mibps": 14.086200725283058, 00:25:01.763 "io_failed": 0, 00:25:01.763 "io_timeout": 0, 00:25:01.763 "avg_latency_us": 35430.8221754662, 00:25:01.763 "min_latency_us": 9951.762962962963, 00:25:01.763 "max_latency_us": 38253.60592592593 00:25:01.763 } 00:25:01.763 ], 00:25:01.763 "core_count": 1 00:25:01.763 } 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:01.763 nvmf_trace.0 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 946551 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 946551 ']' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 946551 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 946551 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 946551' 00:25:01.763 killing process with pid 946551 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 946551 00:25:01.763 Received shutdown signal, test time was about 10.000000 seconds 00:25:01.763 00:25:01.763 Latency(us) 00:25:01.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.763 =================================================================================================================== 00:25:01.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 946551 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.763 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.763 rmmod nvme_tcp 00:25:02.022 rmmod nvme_fabrics 00:25:02.022 rmmod nvme_keyring 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 946525 ']' 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 946525 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 946525 ']' 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 946525 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 946525 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 946525' 00:25:02.022 killing process with pid 946525 00:25:02.022 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 946525 00:25:02.022 01:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 946525 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.280 01:42:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N1G 00:25:04.188 00:25:04.188 real 0m17.142s 00:25:04.188 user 0m22.624s 00:25:04.188 sys 0m5.402s 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:04.188 ************************************ 00:25:04.188 END TEST nvmf_fips 00:25:04.188 ************************************ 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.188 01:42:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:04.188 ************************************ 00:25:04.188 START TEST nvmf_control_msg_list 00:25:04.188 ************************************ 00:25:04.188 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:04.446 * Looking for test storage... 
00:25:04.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.446 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:04.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.447 --rc genhtml_branch_coverage=1 00:25:04.447 --rc genhtml_function_coverage=1 00:25:04.447 --rc genhtml_legend=1 00:25:04.447 --rc geninfo_all_blocks=1 00:25:04.447 --rc geninfo_unexecuted_blocks=1 00:25:04.447 00:25:04.447 ' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:04.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.447 --rc genhtml_branch_coverage=1 00:25:04.447 --rc genhtml_function_coverage=1 00:25:04.447 --rc genhtml_legend=1 00:25:04.447 --rc geninfo_all_blocks=1 00:25:04.447 --rc geninfo_unexecuted_blocks=1 00:25:04.447 00:25:04.447 ' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:04.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.447 --rc genhtml_branch_coverage=1 00:25:04.447 --rc genhtml_function_coverage=1 00:25:04.447 --rc genhtml_legend=1 00:25:04.447 --rc geninfo_all_blocks=1 00:25:04.447 --rc geninfo_unexecuted_blocks=1 00:25:04.447 00:25:04.447 ' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:04.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.447 --rc genhtml_branch_coverage=1 00:25:04.447 --rc genhtml_function_coverage=1 00:25:04.447 --rc genhtml_legend=1 00:25:04.447 --rc geninfo_all_blocks=1 00:25:04.447 --rc geninfo_unexecuted_blocks=1 00:25:04.447 00:25:04.447 ' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:04.447 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.448 01:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:06.979 01:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:06.979 01:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.979 Found net devices under 
0000:0a:00.1: cvl_0_1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.979 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.980 01:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:25:06.980 00:25:06.980 --- 10.0.0.2 ping statistics --- 00:25:06.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.980 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:06.980 00:25:06.980 --- 10.0.0.1 ping statistics --- 00:25:06.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.980 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=949939 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 949939 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 949939 ']' 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.980 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.980 [2024-10-01 01:42:46.578517] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:06.980 [2024-10-01 01:42:46.578605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.980 [2024-10-01 01:42:46.648721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.980 [2024-10-01 01:42:46.745195] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.980 [2024-10-01 01:42:46.745265] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.980 [2024-10-01 01:42:46.745279] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.980 [2024-10-01 01:42:46.745291] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.980 [2024-10-01 01:42:46.745317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.980 [2024-10-01 01:42:46.745357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 [2024-10-01 01:42:46.900151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.237 01:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 Malloc0 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.237 [2024-10-01 01:42:46.953573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=949959 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=949960 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=949961 00:25:07.237 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 949959 00:25:07.238 01:42:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.238 [2024-10-01 01:42:47.012007] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.238 [2024-10-01 01:42:47.022027] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.238 [2024-10-01 01:42:47.022266] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.608 Initializing NVMe Controllers 00:25:08.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:08.608 Initialization complete. Launching workers. 00:25:08.608 ======================================================== 00:25:08.608 Latency(us) 00:25:08.608 Device Information : IOPS MiB/s Average min max 00:25:08.608 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 184.00 0.72 5649.64 208.23 41960.16 00:25:08.608 ======================================================== 00:25:08.608 Total : 184.00 0.72 5649.64 208.23 41960.16 00:25:08.608 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 949960 00:25:08.608 Initializing NVMe Controllers 00:25:08.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:08.608 Initialization complete. Launching workers. 00:25:08.608 ======================================================== 00:25:08.608 Latency(us) 00:25:08.608 Device Information : IOPS MiB/s Average min max 00:25:08.608 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41171.39 40826.04 41947.70 00:25:08.608 ======================================================== 00:25:08.608 Total : 25.00 0.10 41171.39 40826.04 41947.70 00:25:08.608 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 949961 00:25:08.608 Initializing NVMe Controllers 00:25:08.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:08.608 Initialization complete. Launching workers. 
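
Condensed from the trace above, this is the target-side setup and the three initiator runs that exercise the shared control-message list: the transport is created with a pool of exactly one control message, then three perf instances on different cores compete for it. Treat it as a sketch of the traced sequence, not the control_msg_list.sh source; the SPDK path is shortened and rpc.py is assumed to use its default /var/tmp/spdk.sock socket.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Target side ('-t tcp -o' is taken verbatim from NVMF_TRANSPORT_OPTS in the trace).
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o \
      --in-capsule-data-size 768 --control-msg-num 1
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: three concurrent perf runs, one per core mask, as in the trace.
  for mask in 0x2 0x4 0x8; do
      $SPDK/build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait
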
00:25:08.608 ======================================================== 00:25:08.608 Latency(us) 00:25:08.608 Device Information : IOPS MiB/s Average min max 00:25:08.608 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4646.89 18.15 214.82 202.08 350.47 00:25:08.608 ======================================================== 00:25:08.608 Total : 4646.89 18.15 214.82 202.08 350.47 00:25:08.608 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.608 rmmod nvme_tcp 00:25:08.608 rmmod nvme_fabrics 00:25:08.608 rmmod nvme_keyring 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 949939 ']' 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 949939 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 949939 ']' 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 949939 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 949939 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 949939' 00:25:08.608 killing process with pid 949939 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 949939 00:25:08.608 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 949939 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.868 01:42:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.402 00:25:11.402 real 0m6.660s 00:25:11.402 user 0m5.939s 00:25:11.402 sys 0m2.799s 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.402 ************************************ 00:25:11.402 END TEST nvmf_control_msg_list 00:25:11.402 ************************************ 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.402 ************************************ 00:25:11.402 START TEST nvmf_wait_for_buf 00:25:11.402 ************************************ 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.402 * Looking for test storage... 
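
Both this test and the one above call nvmftestinit, whose traced effect is the small two-port topology seen earlier: the first ice port (cvl_0_0) is moved into a private namespace and becomes the 10.0.0.2 target, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, an iptables rule tagged SPDK_NVMF opens port 4420, and a ping in each direction verifies the link. A rough reconstruction of those steps, using the device names this host reports:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                    # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  # Tagged so nvmftestfini can strip it later with iptables-save | grep -v SPDK_NVMF
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                               # root namespace -> target
  ip netns exec $NS ping -c 1 10.0.0.1             # target namespace -> initiator
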
00:25:11.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:11.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.402 --rc genhtml_branch_coverage=1 00:25:11.402 --rc genhtml_function_coverage=1 00:25:11.402 --rc genhtml_legend=1 00:25:11.402 --rc geninfo_all_blocks=1 00:25:11.402 --rc geninfo_unexecuted_blocks=1 00:25:11.402 00:25:11.402 ' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:11.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.402 --rc genhtml_branch_coverage=1 00:25:11.402 --rc genhtml_function_coverage=1 00:25:11.402 --rc genhtml_legend=1 00:25:11.402 --rc geninfo_all_blocks=1 00:25:11.402 --rc geninfo_unexecuted_blocks=1 00:25:11.402 00:25:11.402 ' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:11.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.402 --rc genhtml_branch_coverage=1 00:25:11.402 --rc genhtml_function_coverage=1 00:25:11.402 --rc genhtml_legend=1 00:25:11.402 --rc geninfo_all_blocks=1 00:25:11.402 --rc geninfo_unexecuted_blocks=1 00:25:11.402 00:25:11.402 ' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:11.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.402 --rc genhtml_branch_coverage=1 00:25:11.402 --rc genhtml_function_coverage=1 00:25:11.402 --rc genhtml_legend=1 00:25:11.402 --rc geninfo_all_blocks=1 00:25:11.402 --rc geninfo_unexecuted_blocks=1 00:25:11.402 00:25:11.402 ' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.402 01:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.402 01:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.306 
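A note on the "[: : integer expression expected" complaint from nvmf/common.sh line 33 in the trace above: build_nvmf_app_args runs a numeric test ('[' '' -eq 1 ']') against a variable that is unset in this configuration, and an empty operand is not a valid integer for the [ builtin. The condition simply evaluates false and the script carries on to the '-n' checks, so the message is noise rather than a failure. A minimal sketch of the failure mode and one guarded form (the flag name is illustrative, not the script's own):

  # Reproduces the message: an empty string is not an integer operand for '['
  flag=""
  if [ "$flag" -eq 1 ]; then echo "enabled"; fi     # stderr: "[: : integer expression expected"

  # One way to silence it: default the value before the numeric comparison
  if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; fi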
01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:13.306 
01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.306 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.307 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.307 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.307 01:42:52 
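The device discovery traced above boils down to two lookups: match supported vendor/device IDs (here two Intel E810 ports, 0x8086:0x159b), then resolve each PCI address to its kernel network interface through sysfs, which is what the "/sys/bus/pci/devices/$pci/net/"* expansion in the trace does before stripping the path down to names like cvl_0_0 and cvl_0_1. A standalone sketch of the same idea (the 0000:0a:00.0 address is taken from this log; adjust for other hosts):

  # List the net interfaces the kernel created for one PCI function.
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] || continue            # glob may not match if no netdev is bound
      echo "PCI $pci -> ${dev##*/}"        # e.g. "PCI 0000:0a:00.0 -> cvl_0_0"
  done

  # The vendor/device IDs used for the match live next to it:
  cat /sys/bus/pci/devices/"$pci"/vendor /sys/bus/pci/devices/"$pci"/device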
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.307 01:42:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:13.307 00:25:13.307 --- 10.0.0.2 ping statistics --- 00:25:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.307 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:13.307 00:25:13.307 --- 10.0.0.1 ping statistics --- 00:25:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.307 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=952151 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 952151 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 952151 ']' 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.307 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.565 [2024-10-01 01:42:53.200635] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
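For reference, the nvmf_tcp_init sequence traced above isolates one physical port in a network namespace so target and initiator talk over real NICs on the same host; after the two ping checks succeed, the target application is launched inside that namespace with ip netns exec (and --wait-for-rpc, so pool sizes can be tuned before initialization). A condensed sketch of the same commands, with interface names and addresses copied from this log:

  ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                 # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host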
00:25:13.565 [2024-10-01 01:42:53.200702] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.565 [2024-10-01 01:42:53.267075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.565 [2024-10-01 01:42:53.358354] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.565 [2024-10-01 01:42:53.358410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.565 [2024-10-01 01:42:53.358439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.565 [2024-10-01 01:42:53.358453] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.565 [2024-10-01 01:42:53.358465] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.565 [2024-10-01 01:42:53.358505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 Malloc0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 [2024-10-01 01:42:53.574099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.824 [2024-10-01 01:42:53.598339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.824 01:42:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:13.824 [2024-10-01 01:42:53.669123] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:15.722 Initializing NVMe Controllers 00:25:15.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:15.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:15.722 Initialization complete. Launching workers. 00:25:15.722 ======================================================== 00:25:15.722 Latency(us) 00:25:15.722 Device Information : IOPS MiB/s Average min max 00:25:15.722 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.55 16.07 32230.37 8001.15 63852.55 00:25:15.722 ======================================================== 00:25:15.722 Total : 128.55 16.07 32230.37 8001.15 63852.55 00:25:15.722 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:15.722 rmmod nvme_tcp 00:25:15.722 rmmod nvme_fabrics 00:25:15.722 rmmod nvme_keyring 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 952151 ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 952151 ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
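The pass/fail criterion of this test is visible in the trace just above: iobuf_set_options shrank the small iobuf pool to 154 buffers before the target started handling I/O, so the 4-deep 128 KiB random-read workload is expected to exhaust it, and the test then asserts that nvmf_TCP reported a non-zero small_pool.retry count (2038 in this run). A condensed sketch of that sequence using the suite's rpc_cmd wrapper around scripts/rpc.py, with the values taken from this run:

  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc_cmd framework_start_init                        # target was started with --wait-for-rpc
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  retry_count=$(rpc_cmd iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1                  # fail if no buffer waits were recorded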
common/autotest_common.sh@955 -- # uname 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 952151' 00:25:15.722 killing process with pid 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 952151 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.722 01:42:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:18.254 00:25:18.254 real 0m6.776s 00:25:18.254 user 0m3.178s 00:25:18.254 sys 0m2.065s 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:18.254 ************************************ 00:25:18.254 END TEST nvmf_wait_for_buf 00:25:18.254 ************************************ 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.254 ************************************ 00:25:18.254 START TEST nvmf_fuzz 00:25:18.254 ************************************ 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:18.254 * Looking for test storage... 00:25:18.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.254 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.255 --rc genhtml_branch_coverage=1 00:25:18.255 --rc genhtml_function_coverage=1 00:25:18.255 --rc genhtml_legend=1 00:25:18.255 --rc geninfo_all_blocks=1 00:25:18.255 --rc geninfo_unexecuted_blocks=1 00:25:18.255 00:25:18.255 ' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.255 --rc genhtml_branch_coverage=1 00:25:18.255 --rc genhtml_function_coverage=1 00:25:18.255 --rc genhtml_legend=1 00:25:18.255 --rc geninfo_all_blocks=1 00:25:18.255 --rc geninfo_unexecuted_blocks=1 00:25:18.255 00:25:18.255 ' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.255 --rc genhtml_branch_coverage=1 00:25:18.255 --rc genhtml_function_coverage=1 00:25:18.255 --rc genhtml_legend=1 00:25:18.255 --rc geninfo_all_blocks=1 00:25:18.255 --rc geninfo_unexecuted_blocks=1 00:25:18.255 00:25:18.255 ' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:18.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.255 --rc genhtml_branch_coverage=1 00:25:18.255 --rc genhtml_function_coverage=1 00:25:18.255 --rc genhtml_legend=1 00:25:18.255 --rc geninfo_all_blocks=1 00:25:18.255 --rc geninfo_unexecuted_blocks=1 00:25:18.255 00:25:18.255 ' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.255 01:42:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:20.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:20.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.160 
01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:20.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:20.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.160 01:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.160 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:25:20.161 00:25:20.161 --- 10.0.0.2 ping statistics --- 00:25:20.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.161 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:25:20.161 00:25:20.161 --- 10.0.0.1 ping statistics --- 00:25:20.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.161 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=954371 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 954371 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 954371 ']' 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
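The trace above is effectively a recipe for the two-port TCP test bed: one port of the NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side, while its peer (cvl_0_1) stays in the root namespace as the initiator, and a firewall rule opens TCP 4420 before both directions are ping-checked. A condensed sketch of that sequence, with interface names and addresses taken from the log above (run as root; the cvl_* names are specific to this E810 setup):

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side and verify reachability both ways.
    # (The real script tags the rule with an SPDK_NVMF comment so teardown can strip it.)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1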
00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.161 01:42:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 Malloc0 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:20.729 01:43:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:52.850 Fuzzing completed. 
Shutting down the fuzz application 00:25:52.850 00:25:52.850 Dumping successful admin opcodes: 00:25:52.850 8, 9, 10, 24, 00:25:52.850 Dumping successful io opcodes: 00:25:52.850 0, 9, 00:25:52.850 NS: 0x200003aeff00 I/O qp, Total commands completed: 453053, total successful commands: 2632, random_seed: 2281783168 00:25:52.850 NS: 0x200003aeff00 admin qp, Total commands completed: 55072, total successful commands: 440, random_seed: 2695160768 00:25:52.850 01:43:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:52.850 Fuzzing completed. Shutting down the fuzz application 00:25:52.850 00:25:52.850 Dumping successful admin opcodes: 00:25:52.850 24, 00:25:52.850 Dumping successful io opcodes: 00:25:52.850 00:25:52.850 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3715405643 00:25:52.850 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3715531895 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.850 rmmod nvme_tcp 00:25:52.850 rmmod nvme_fabrics 00:25:52.850 rmmod nvme_keyring 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 954371 ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 954371 ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:52.850 01:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 954371' 00:25:52.850 killing process with pid 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 954371 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:52.850 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:52.851 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:52.851 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:52.851 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:25:52.851 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:52.851 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:25:53.110 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.110 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:53.110 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.110 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.110 01:43:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:55.019 00:25:55.019 real 0m37.223s 00:25:55.019 user 0m50.849s 00:25:55.019 sys 0m15.508s 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 ************************************ 00:25:55.019 END TEST nvmf_fuzz 00:25:55.019 ************************************ 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 
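For reference, the nvmf_fuzz test that just finished reduces to a short recipe: start nvmf_tgt inside the target namespace, create a TCP transport and one Malloc-backed subsystem, then aim the nvme_fuzz app at the resulting listener, first as a 30-second seeded random run and then as a replay of the bundled example.json. A condensed sketch, using scripts/rpc.py directly instead of the rpc_cmd wrapper and with paths shortened (the flags and seed are the ones visible in the log above):

    # Target-side configuration over /var/tmp/spdk.sock.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # 30-second randomized pass with a fixed seed, then a replay of the canned JSON commands.
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a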
************************************ 00:25:55.019 START TEST nvmf_multiconnection 00:25:55.019 ************************************ 00:25:55.019 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.279 * Looking for test storage... 00:25:55.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.279 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:55.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.280 --rc genhtml_branch_coverage=1 00:25:55.280 --rc genhtml_function_coverage=1 00:25:55.280 --rc genhtml_legend=1 00:25:55.280 --rc geninfo_all_blocks=1 00:25:55.280 --rc geninfo_unexecuted_blocks=1 00:25:55.280 00:25:55.280 ' 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:55.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.280 --rc genhtml_branch_coverage=1 00:25:55.280 --rc genhtml_function_coverage=1 00:25:55.280 --rc genhtml_legend=1 00:25:55.280 --rc geninfo_all_blocks=1 00:25:55.280 --rc geninfo_unexecuted_blocks=1 00:25:55.280 00:25:55.280 ' 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:55.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.280 --rc genhtml_branch_coverage=1 00:25:55.280 --rc genhtml_function_coverage=1 00:25:55.280 --rc genhtml_legend=1 00:25:55.280 --rc geninfo_all_blocks=1 00:25:55.280 --rc geninfo_unexecuted_blocks=1 00:25:55.280 00:25:55.280 ' 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:55.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.280 --rc genhtml_branch_coverage=1 00:25:55.280 --rc genhtml_function_coverage=1 00:25:55.280 --rc genhtml_legend=1 00:25:55.280 --rc geninfo_all_blocks=1 00:25:55.280 --rc geninfo_unexecuted_blocks=1 00:25:55.280 00:25:55.280 ' 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.280 01:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.280 01:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.817 01:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
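What the loop in the trace above is doing, roughly: nvmf/common.sh classifies NICs by PCI vendor/device ID (here both ports of an Intel E810, 0x8086:0x159b, matched into the e810 array), then resolves each PCI address to its kernel netdev through sysfs and keeps only interfaces that are up. A minimal sketch of the sysfs resolution step, assuming the same layout as in this run:

    # For each supported PCI function, list the netdev(s) the kernel bound to it.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done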
00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.817 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:25:57.818 00:25:57.818 --- 10.0.0.2 ping statistics --- 00:25:57.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.818 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:25:57.818 00:25:57.818 --- 10.0.0.1 ping statistics --- 00:25:57.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.818 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=959990 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 959990 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 959990 ']' 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:57.818 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.818 [2024-10-01 01:43:37.557798] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:25:57.818 [2024-10-01 01:43:37.557886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.818 [2024-10-01 01:43:37.632133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.077 [2024-10-01 01:43:37.728442] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.077 [2024-10-01 01:43:37.728499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.077 [2024-10-01 01:43:37.728523] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.077 [2024-10-01 01:43:37.728537] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.077 [2024-10-01 01:43:37.728549] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
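The multiconnection target is launched the same way as the fuzz target, only with four cores (-m 0xF) instead of one, and nvmfappstart then waits for the RPC socket before any rpc_cmd is issued. A rough sketch of that launch-and-wait pattern, assuming the workspace layout of this job; the rpc_get_methods probe is just a stand-in here for what waitforlisten actually does:

    # Start the target inside the namespace, then block until its RPC socket answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done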
00:25:58.077 [2024-10-01 01:43:37.728611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.077 [2024-10-01 01:43:37.728679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.077 [2024-10-01 01:43:37.728702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.077 [2024-10-01 01:43:37.728705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.077 [2024-10-01 01:43:37.870158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.077 Malloc1 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.077 [2024-10-01 01:43:37.924864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.077 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.336 Malloc2 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.336 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 Malloc3 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 Malloc4 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 Malloc5 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 Malloc6 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.337 Malloc7 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.337 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
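The surrounding trace unrolls the same four target-side RPCs once per subsystem (Malloc bdev, subsystem, namespace, TCP listener), as driven by target/multiconnection.sh lines 21-25. A condensed sketch of that loop -- not the script itself -- with NVMF_SUBSYS=11 and the 10.0.0.2:4420 listener taken from the trace, and assuming rpc_cmd forwards to SPDK's scripts/rpc.py as it normally does in this harness:

for i in $(seq 1 11); do
    # 64 MB malloc bdev with 512-byte blocks, named MallocN
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    # subsystem cnodeN: -a allows any host, -s sets serial SPDKN
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    # expose the bdev as a namespace and open a TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done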
00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 Malloc8 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 Malloc9 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:58.598 01:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 Malloc10 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.598 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.599 Malloc11 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.599 01:43:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:59.169 01:43:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:59.169 01:43:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:59.169 01:43:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.169 01:43:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:59.169 01:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.708 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:01.967 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:01.967 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:01.967 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.967 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.967 01:43:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.879 01:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:04.450 01:43:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:04.450 01:43:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:04.450 01:43:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:04.450 01:43:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.450 01:43:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.986 01:43:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:07.244 01:43:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:07.244 01:43:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.244 01:43:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.244 01:43:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.244 01:43:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.782 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:10.043 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:10.043 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:10.043 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.043 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:10.043 01:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.950 01:43:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:12.888 01:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:12.888 01:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:12.888 01:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.888 01:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:12.888 01:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:14.795 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.796 01:43:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:15.730 01:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:15.730 01:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.730 01:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.730 01:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.730 01:43:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.630 01:43:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:18.569 01:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:18.569 01:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:18.569 01:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.569 01:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:18.569 01:43:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.479 01:44:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:21.416 01:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:21.416 01:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:21.416 01:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.416 01:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:21.416 01:44:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.331 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:24.303 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:24.303 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:24.303 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.303 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:24.303 01:44:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:26.212 01:44:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.212 01:44:05 
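Each nvme connect in this stretch is paired with the harness's waitforserial helper, which the trace expands inline (sleep 2, lsblk | grep -c SPDKn, up to 16 attempts). A minimal host-side sketch of the same connect-and-wait sequence, with the hostnqn/hostid and polling budget copied from the trace; this is a condensed rendering, not the helper's exact code:

for i in $(seq 1 11); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode$i \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # poll until a block device with serial SPDKN shows up, roughly 16 x 2 s max
    tries=0
    while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -lt 1 ]; do
        sleep 2
        tries=$((tries + 1))
        [ "$tries" -gt 15 ] && { echo "SPDK$i did not appear" >&2; exit 1; }
    done
done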
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:27.151 01:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:27.151 01:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:27.151 01:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.151 01:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:27.151 01:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:29.684 01:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:29.684 [global] 00:26:29.684 thread=1 00:26:29.684 invalidate=1 00:26:29.684 rw=read 00:26:29.684 time_based=1 00:26:29.684 runtime=10 00:26:29.684 ioengine=libaio 00:26:29.684 direct=1 00:26:29.684 bs=262144 00:26:29.684 iodepth=64 00:26:29.684 norandommap=1 00:26:29.684 numjobs=1 00:26:29.684 00:26:29.684 [job0] 00:26:29.684 filename=/dev/nvme0n1 00:26:29.684 [job1] 00:26:29.684 filename=/dev/nvme10n1 00:26:29.684 [job2] 00:26:29.684 filename=/dev/nvme1n1 00:26:29.684 [job3] 00:26:29.684 filename=/dev/nvme2n1 00:26:29.684 [job4] 00:26:29.684 filename=/dev/nvme3n1 00:26:29.684 [job5] 00:26:29.684 filename=/dev/nvme4n1 00:26:29.684 [job6] 00:26:29.684 filename=/dev/nvme5n1 00:26:29.684 [job7] 00:26:29.684 filename=/dev/nvme6n1 00:26:29.684 [job8] 00:26:29.684 filename=/dev/nvme7n1 00:26:29.684 [job9] 00:26:29.684 filename=/dev/nvme8n1 00:26:29.684 [job10] 00:26:29.684 filename=/dev/nvme9n1 00:26:29.684 Could not set queue depth (nvme0n1) 00:26:29.684 Could not set queue depth (nvme10n1) 00:26:29.684 Could not set queue depth (nvme1n1) 00:26:29.684 Could not set queue depth (nvme2n1) 00:26:29.684 Could not set queue depth (nvme3n1) 00:26:29.684 Could not set queue depth (nvme4n1) 00:26:29.684 Could not set queue depth (nvme5n1) 00:26:29.684 Could not set queue depth (nvme6n1) 00:26:29.684 Could not set queue depth (nvme7n1) 00:26:29.684 Could not set queue depth (nvme8n1) 00:26:29.684 Could not set queue depth (nvme9n1) 00:26:29.684 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:29.684 fio-3.35 00:26:29.684 Starting 11 threads 00:26:41.884 00:26:41.884 job0: (groupid=0, jobs=1): err= 0: pid=964229: Tue Oct 1 01:44:19 2024 00:26:41.884 read: IOPS=311, BW=77.8MiB/s (81.6MB/s)(791MiB/10163msec) 00:26:41.884 slat (usec): min=13, max=785273, avg=2834.18, stdev=23432.18 00:26:41.884 clat (usec): min=1598, max=1869.1k, avg=202697.16, stdev=320605.12 00:26:41.884 lat (usec): min=1617, max=1869.2k, avg=205531.34, stdev=324975.56 00:26:41.884 clat percentiles (msec): 00:26:41.884 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 52], 20.00th=[ 60], 00:26:41.884 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 71], 00:26:41.884 | 70.00th=[ 106], 80.00th=[ 167], 90.00th=[ 735], 95.00th=[ 1036], 00:26:41.884 | 99.00th=[ 1485], 99.50th=[ 1485], 99.90th=[ 1485], 99.95th=[ 1485], 00:26:41.884 | 99.99th=[ 1871] 00:26:41.884 bw ( KiB/s): min= 512, max=265728, per=14.33%, avg=79305.85, stdev=94593.44, samples=20 00:26:41.884 iops : min= 2, max= 1038, avg=309.75, stdev=369.53, samples=20 00:26:41.884 lat (msec) : 2=0.16%, 4=0.57%, 10=0.19%, 20=2.72%, 50=6.01% 00:26:41.884 lat (msec) : 100=59.52%, 250=14.71%, 500=2.69%, 750=3.54%, 1000=4.21% 00:26:41.884 lat (msec) : 2000=5.69% 00:26:41.884 cpu : usr=0.21%, sys=1.21%, ctx=798, majf=0, minf=4098 00:26:41.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:41.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.884 issued rwts: total=3162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.884 job1: (groupid=0, jobs=1): err= 0: pid=964231: Tue Oct 1 01:44:19 2024 00:26:41.884 read: IOPS=210, BW=52.5MiB/s (55.1MB/s)(538MiB/10230msec) 00:26:41.884 slat (usec): min=9, max=834871, avg=2359.59, stdev=27102.57 00:26:41.884 clat (usec): min=1043, max=1620.8k, avg=301886.03, stdev=390118.20 00:26:41.884 lat (usec): min=1078, max=1877.2k, avg=304245.62, stdev=394395.30 00:26:41.884 clat percentiles (usec): 00:26:41.884 | 1.00th=[ 1450], 5.00th=[ 3032], 10.00th=[ 3818], 00:26:41.884 | 20.00th=[ 5473], 30.00th=[ 10028], 40.00th=[ 28181], 00:26:41.884 | 50.00th=[ 67634], 60.00th=[ 254804], 70.00th=[ 450888], 00:26:41.884 | 80.00th=[ 608175], 90.00th=[ 926942], 95.00th=[1149240], 
00:26:41.884 | 99.00th=[1535116], 99.50th=[1568670], 99.90th=[1568670], 00:26:41.884 | 99.95th=[1568670], 99.99th=[1619002] 00:26:41.884 bw ( KiB/s): min= 6656, max=287744, per=9.66%, avg=53423.20, stdev=76392.26, samples=20 00:26:41.884 iops : min= 26, max= 1124, avg=208.65, stdev=298.41, samples=20 00:26:41.884 lat (msec) : 2=2.60%, 4=11.16%, 10=16.05%, 20=9.30%, 50=9.30% 00:26:41.884 lat (msec) : 100=4.79%, 250=6.51%, 500=12.33%, 750=14.19%, 1000=5.12% 00:26:41.884 lat (msec) : 2000=8.65% 00:26:41.884 cpu : usr=0.11%, sys=0.69%, ctx=765, majf=0, minf=4097 00:26:41.884 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:41.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.884 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.884 job2: (groupid=0, jobs=1): err= 0: pid=964235: Tue Oct 1 01:44:19 2024 00:26:41.884 read: IOPS=135, BW=33.8MiB/s (35.5MB/s)(346MiB/10236msec) 00:26:41.884 slat (usec): min=12, max=306504, avg=6530.56, stdev=27788.23 00:26:41.884 clat (msec): min=17, max=1655, avg=466.16, stdev=368.01 00:26:41.884 lat (msec): min=17, max=1655, avg=472.69, stdev=373.14 00:26:41.884 clat percentiles (msec): 00:26:41.884 | 1.00th=[ 47], 5.00th=[ 65], 10.00th=[ 79], 20.00th=[ 150], 00:26:41.884 | 30.00th=[ 207], 40.00th=[ 230], 50.00th=[ 330], 60.00th=[ 523], 00:26:41.884 | 70.00th=[ 625], 80.00th=[ 735], 90.00th=[ 1045], 95.00th=[ 1250], 00:26:41.884 | 99.00th=[ 1385], 99.50th=[ 1569], 99.90th=[ 1653], 99.95th=[ 1653], 00:26:41.884 | 99.99th=[ 1653] 00:26:41.884 bw ( KiB/s): min= 9728, max=82944, per=6.11%, avg=33811.80, stdev=23580.61, samples=20 00:26:41.884 iops : min= 38, max= 324, avg=132.05, stdev=92.08, samples=20 00:26:41.884 lat (msec) : 20=0.22%, 50=1.23%, 100=13.36%, 250=28.16%, 500=14.87% 00:26:41.884 lat (msec) : 750=23.25%, 1000=6.79%, 2000=12.13% 00:26:41.884 cpu : usr=0.07%, sys=0.51%, ctx=247, majf=0, minf=4097 00:26:41.884 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:26:41.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.884 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.884 issued rwts: total=1385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.884 job3: (groupid=0, jobs=1): err= 0: pid=964236: Tue Oct 1 01:44:19 2024 00:26:41.884 read: IOPS=232, BW=58.2MiB/s (61.0MB/s)(592MiB/10180msec) 00:26:41.884 slat (usec): min=13, max=229324, avg=4232.93, stdev=14748.56 00:26:41.884 clat (msec): min=22, max=726, avg=270.65, stdev=165.95 00:26:41.884 lat (msec): min=28, max=726, avg=274.88, stdev=168.36 00:26:41.884 clat percentiles (msec): 00:26:41.884 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 52], 20.00th=[ 142], 00:26:41.884 | 30.00th=[ 180], 40.00th=[ 203], 50.00th=[ 224], 60.00th=[ 259], 00:26:41.884 | 70.00th=[ 347], 80.00th=[ 451], 90.00th=[ 518], 95.00th=[ 575], 00:26:41.884 | 99.00th=[ 651], 99.50th=[ 676], 99.90th=[ 726], 99.95th=[ 726], 00:26:41.884 | 99.99th=[ 726] 00:26:41.884 bw ( KiB/s): min=20480, max=176640, per=10.66%, avg=58977.25, stdev=38949.45, samples=20 00:26:41.884 iops : min= 80, max= 690, avg=230.35, stdev=152.15, samples=20 00:26:41.884 lat (msec) : 50=9.42%, 100=5.91%, 250=42.65%, 500=30.19%, 750=11.82% 00:26:41.884 cpu : usr=0.14%, sys=0.84%, 
ctx=371, majf=0, minf=4097 00:26:41.884 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:41.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.884 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.884 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.884 job4: (groupid=0, jobs=1): err= 0: pid=964237: Tue Oct 1 01:44:19 2024 00:26:41.884 read: IOPS=95, BW=23.8MiB/s (25.0MB/s)(244MiB/10234msec) 00:26:41.884 slat (usec): min=9, max=612272, avg=8959.38, stdev=42373.06 00:26:41.884 clat (usec): min=1552, max=1584.1k, avg=662368.62, stdev=437648.74 00:26:41.884 lat (usec): min=1580, max=1628.2k, avg=671328.00, stdev=440981.01 00:26:41.884 clat percentiles (msec): 00:26:41.884 | 1.00th=[ 10], 5.00th=[ 84], 10.00th=[ 94], 20.00th=[ 146], 00:26:41.884 | 30.00th=[ 405], 40.00th=[ 550], 50.00th=[ 625], 60.00th=[ 709], 00:26:41.884 | 70.00th=[ 885], 80.00th=[ 1133], 90.00th=[ 1318], 95.00th=[ 1368], 00:26:41.884 | 99.00th=[ 1502], 99.50th=[ 1502], 99.90th=[ 1586], 99.95th=[ 1586], 00:26:41.884 | 99.99th=[ 1586] 00:26:41.884 bw ( KiB/s): min= 1536, max=70656, per=4.22%, avg=23321.15, stdev=16668.80, samples=20 00:26:41.885 iops : min= 6, max= 276, avg=91.05, stdev=65.17, samples=20 00:26:41.885 lat (msec) : 2=0.21%, 4=0.21%, 10=0.62%, 20=0.41%, 50=2.77% 00:26:41.885 lat (msec) : 100=7.28%, 250=11.49%, 500=12.92%, 750=27.69%, 1000=9.85% 00:26:41.885 lat (msec) : 2000=26.56% 00:26:41.885 cpu : usr=0.03%, sys=0.38%, ctx=175, majf=0, minf=4097 00:26:41.885 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 job5: (groupid=0, jobs=1): err= 0: pid=964238: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=360, BW=90.2MiB/s (94.6MB/s)(917MiB/10161msec) 00:26:41.885 slat (usec): min=12, max=501933, avg=2559.53, stdev=17734.84 00:26:41.885 clat (msec): min=2, max=1436, avg=174.58, stdev=255.73 00:26:41.885 lat (msec): min=2, max=1564, avg=177.14, stdev=259.23 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 52], 00:26:41.885 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 104], 00:26:41.885 | 70.00th=[ 131], 80.00th=[ 174], 90.00th=[ 592], 95.00th=[ 844], 00:26:41.885 | 99.00th=[ 1150], 99.50th=[ 1284], 99.90th=[ 1301], 99.95th=[ 1435], 00:26:41.885 | 99.99th=[ 1435] 00:26:41.885 bw ( KiB/s): min= 7680, max=313856, per=16.67%, avg=92249.60, stdev=91130.52, samples=20 00:26:41.885 iops : min= 30, max= 1226, avg=360.35, stdev=355.98, samples=20 00:26:41.885 lat (msec) : 4=0.05%, 10=1.20%, 20=0.19%, 50=15.32%, 100=41.93% 00:26:41.885 lat (msec) : 250=28.52%, 500=2.29%, 750=2.97%, 1000=5.02%, 2000=2.51% 00:26:41.885 cpu : usr=0.21%, sys=1.20%, ctx=570, majf=0, minf=4097 00:26:41.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=3668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:26:41.885 job6: (groupid=0, jobs=1): err= 0: pid=964239: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=250, BW=62.7MiB/s (65.8MB/s)(642MiB/10233msec) 00:26:41.885 slat (usec): min=12, max=386643, avg=3780.86, stdev=20135.62 00:26:41.885 clat (msec): min=11, max=1488, avg=251.00, stdev=315.74 00:26:41.885 lat (msec): min=11, max=1532, avg=254.78, stdev=320.51 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 62], 00:26:41.885 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 108], 60.00th=[ 136], 00:26:41.885 | 70.00th=[ 180], 80.00th=[ 451], 90.00th=[ 693], 95.00th=[ 1036], 00:26:41.885 | 99.00th=[ 1351], 99.50th=[ 1401], 99.90th=[ 1485], 99.95th=[ 1485], 00:26:41.885 | 99.99th=[ 1485] 00:26:41.885 bw ( KiB/s): min=10752, max=266219, per=11.58%, avg=64075.75, stdev=76164.12, samples=20 00:26:41.885 iops : min= 42, max= 1039, avg=250.25, stdev=297.39, samples=20 00:26:41.885 lat (msec) : 20=0.27%, 50=7.13%, 100=39.91%, 250=26.71%, 500=8.06% 00:26:41.885 lat (msec) : 750=9.93%, 1000=2.57%, 2000=5.41% 00:26:41.885 cpu : usr=0.14%, sys=0.93%, ctx=337, majf=0, minf=4098 00:26:41.885 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 job7: (groupid=0, jobs=1): err= 0: pid=964240: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=172, BW=43.1MiB/s (45.2MB/s)(439MiB/10182msec) 00:26:41.885 slat (usec): min=11, max=1048.9k, avg=4269.86, stdev=31946.87 00:26:41.885 clat (msec): min=20, max=1230, avg=366.97, stdev=251.02 00:26:41.885 lat (msec): min=20, max=1867, avg=371.24, stdev=255.53 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 34], 5.00th=[ 51], 10.00th=[ 150], 20.00th=[ 186], 00:26:41.885 | 30.00th=[ 203], 40.00th=[ 234], 50.00th=[ 264], 60.00th=[ 384], 00:26:41.885 | 70.00th=[ 493], 80.00th=[ 550], 90.00th=[ 651], 95.00th=[ 827], 00:26:41.885 | 99.00th=[ 1217], 99.50th=[ 1234], 99.90th=[ 1234], 99.95th=[ 1234], 00:26:41.885 | 99.99th=[ 1234] 00:26:41.885 bw ( KiB/s): min=13312, max=110592, per=8.23%, avg=45536.58, stdev=24867.88, samples=19 00:26:41.885 iops : min= 52, max= 432, avg=177.84, stdev=97.14, samples=19 00:26:41.885 lat (msec) : 50=5.25%, 100=2.34%, 250=38.65%, 500=26.00%, 750=20.30% 00:26:41.885 lat (msec) : 1000=3.59%, 2000=3.88% 00:26:41.885 cpu : usr=0.08%, sys=0.66%, ctx=293, majf=0, minf=3721 00:26:41.885 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=1754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 job8: (groupid=0, jobs=1): err= 0: pid=964241: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=144, BW=36.0MiB/s (37.8MB/s)(369MiB/10227msec) 00:26:41.885 slat (usec): min=8, max=502768, avg=4310.80, stdev=31789.86 00:26:41.885 clat (usec): min=1816, max=1513.9k, avg=439379.25, stdev=412395.70 00:26:41.885 lat (usec): min=1869, max=1513.9k, avg=443690.05, stdev=416324.63 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 
3], 5.00th=[ 5], 10.00th=[ 20], 20.00th=[ 58], 00:26:41.885 | 30.00th=[ 93], 40.00th=[ 123], 50.00th=[ 321], 60.00th=[ 542], 00:26:41.885 | 70.00th=[ 693], 80.00th=[ 793], 90.00th=[ 1083], 95.00th=[ 1250], 00:26:41.885 | 99.00th=[ 1435], 99.50th=[ 1485], 99.90th=[ 1519], 99.95th=[ 1519], 00:26:41.885 | 99.99th=[ 1519] 00:26:41.885 bw ( KiB/s): min= 3072, max=107008, per=6.53%, avg=36113.30, stdev=29982.71, samples=20 00:26:41.885 iops : min= 12, max= 418, avg=141.05, stdev=117.10, samples=20 00:26:41.885 lat (msec) : 2=0.27%, 4=4.61%, 10=0.88%, 20=4.55%, 50=6.45% 00:26:41.885 lat (msec) : 100=15.47%, 250=14.93%, 500=10.85%, 750=20.15%, 1000=8.21% 00:26:41.885 lat (msec) : 2000=13.64% 00:26:41.885 cpu : usr=0.14%, sys=0.50%, ctx=383, majf=0, minf=4097 00:26:41.885 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=1474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 job9: (groupid=0, jobs=1): err= 0: pid=964242: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=137, BW=34.5MiB/s (36.2MB/s)(352MiB/10217msec) 00:26:41.885 slat (usec): min=13, max=1117.2k, avg=6261.57, stdev=42680.31 00:26:41.885 clat (msec): min=12, max=2001, avg=457.40, stdev=410.54 00:26:41.885 lat (msec): min=12, max=2001, avg=463.66, stdev=414.11 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 24], 5.00th=[ 59], 10.00th=[ 94], 20.00th=[ 167], 00:26:41.885 | 30.00th=[ 284], 40.00th=[ 317], 50.00th=[ 355], 60.00th=[ 405], 00:26:41.885 | 70.00th=[ 435], 80.00th=[ 506], 90.00th=[ 978], 95.00th=[ 1485], 00:26:41.885 | 99.00th=[ 1854], 99.50th=[ 1871], 99.90th=[ 1905], 99.95th=[ 2005], 00:26:41.885 | 99.99th=[ 2005] 00:26:41.885 bw ( KiB/s): min= 2048, max=112640, per=6.55%, avg=36244.21, stdev=29571.46, samples=19 00:26:41.885 iops : min= 8, max= 440, avg=141.58, stdev=115.51, samples=19 00:26:41.885 lat (msec) : 20=0.85%, 50=2.48%, 100=6.96%, 250=14.48%, 500=54.51% 00:26:41.885 lat (msec) : 750=6.17%, 1000=5.25%, 2000=9.23%, >=2000=0.07% 00:26:41.885 cpu : usr=0.04%, sys=0.56%, ctx=181, majf=0, minf=4097 00:26:41.885 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 job10: (groupid=0, jobs=1): err= 0: pid=964245: Tue Oct 1 01:44:19 2024 00:26:41.885 read: IOPS=118, BW=29.6MiB/s (31.0MB/s)(302MiB/10228msec) 00:26:41.885 slat (usec): min=9, max=882703, avg=5847.07, stdev=47188.77 00:26:41.885 clat (msec): min=2, max=1780, avg=535.20, stdev=469.80 00:26:41.885 lat (msec): min=2, max=1780, avg=541.04, stdev=475.80 00:26:41.885 clat percentiles (msec): 00:26:41.885 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 47], 00:26:41.885 | 30.00th=[ 73], 40.00th=[ 326], 50.00th=[ 502], 60.00th=[ 676], 00:26:41.885 | 70.00th=[ 785], 80.00th=[ 1053], 90.00th=[ 1217], 95.00th=[ 1334], 00:26:41.885 | 99.00th=[ 1536], 99.50th=[ 1569], 99.90th=[ 1787], 99.95th=[ 1787], 00:26:41.885 | 99.99th=[ 1787] 00:26:41.885 bw ( KiB/s): min= 2048, max=151552, per=5.89%, avg=32569.61, stdev=33849.38, 
samples=18 00:26:41.885 iops : min= 8, max= 592, avg=127.22, stdev=132.23, samples=18 00:26:41.885 lat (msec) : 4=0.50%, 10=1.82%, 20=2.07%, 50=16.38%, 100=16.71% 00:26:41.885 lat (msec) : 250=2.23%, 500=10.34%, 750=15.96%, 1000=12.66%, 2000=21.34% 00:26:41.885 cpu : usr=0.08%, sys=0.44%, ctx=389, majf=0, minf=4097 00:26:41.885 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:26:41.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.885 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:41.885 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:41.885 00:26:41.885 Run status group 0 (all jobs): 00:26:41.885 READ: bw=540MiB/s (567MB/s), 23.8MiB/s-90.2MiB/s (25.0MB/s-94.6MB/s), io=5531MiB (5799MB), run=10161-10236msec 00:26:41.886 00:26:41.886 Disk stats (read/write): 00:26:41.886 nvme0n1: ios=6172/0, merge=0/0, ticks=1228574/0, in_queue=1228574, util=97.26% 00:26:41.886 nvme10n1: ios=4256/0, merge=0/0, ticks=1239978/0, in_queue=1239978, util=97.53% 00:26:41.886 nvme1n1: ios=2671/0, merge=0/0, ticks=1210957/0, in_queue=1210957, util=97.81% 00:26:41.886 nvme2n1: ios=4591/0, merge=0/0, ticks=1207724/0, in_queue=1207724, util=97.90% 00:26:41.886 nvme3n1: ios=1856/0, merge=0/0, ticks=1206644/0, in_queue=1206644, util=98.02% 00:26:41.886 nvme4n1: ios=7176/0, merge=0/0, ticks=1229355/0, in_queue=1229355, util=98.30% 00:26:41.886 nvme5n1: ios=5055/0, merge=0/0, ticks=1229321/0, in_queue=1229321, util=98.48% 00:26:41.886 nvme6n1: ios=3370/0, merge=0/0, ticks=1215165/0, in_queue=1215165, util=98.54% 00:26:41.886 nvme7n1: ios=2916/0, merge=0/0, ticks=1250671/0, in_queue=1250671, util=98.96% 00:26:41.886 nvme8n1: ios=2817/0, merge=0/0, ticks=1272530/0, in_queue=1272530, util=99.12% 00:26:41.886 nvme9n1: ios=2356/0, merge=0/0, ticks=1240195/0, in_queue=1240195, util=99.26% 00:26:41.886 01:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:41.886 [global] 00:26:41.886 thread=1 00:26:41.886 invalidate=1 00:26:41.886 rw=randwrite 00:26:41.886 time_based=1 00:26:41.886 runtime=10 00:26:41.886 ioengine=libaio 00:26:41.886 direct=1 00:26:41.886 bs=262144 00:26:41.886 iodepth=64 00:26:41.886 norandommap=1 00:26:41.886 numjobs=1 00:26:41.886 00:26:41.886 [job0] 00:26:41.886 filename=/dev/nvme0n1 00:26:41.886 [job1] 00:26:41.886 filename=/dev/nvme10n1 00:26:41.886 [job2] 00:26:41.886 filename=/dev/nvme1n1 00:26:41.886 [job3] 00:26:41.886 filename=/dev/nvme2n1 00:26:41.886 [job4] 00:26:41.886 filename=/dev/nvme3n1 00:26:41.886 [job5] 00:26:41.886 filename=/dev/nvme4n1 00:26:41.886 [job6] 00:26:41.886 filename=/dev/nvme5n1 00:26:41.886 [job7] 00:26:41.886 filename=/dev/nvme6n1 00:26:41.886 [job8] 00:26:41.886 filename=/dev/nvme7n1 00:26:41.886 [job9] 00:26:41.886 filename=/dev/nvme8n1 00:26:41.886 [job10] 00:26:41.886 filename=/dev/nvme9n1 00:26:41.886 Could not set queue depth (nvme0n1) 00:26:41.886 Could not set queue depth (nvme10n1) 00:26:41.886 Could not set queue depth (nvme1n1) 00:26:41.886 Could not set queue depth (nvme2n1) 00:26:41.886 Could not set queue depth (nvme3n1) 00:26:41.886 Could not set queue depth (nvme4n1) 00:26:41.886 Could not set queue depth (nvme5n1) 00:26:41.886 Could not set queue depth (nvme6n1) 00:26:41.886 Could not set queue depth (nvme7n1) 00:26:41.886 
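For reference, the randwrite pass whose job file fio-wrapper echoes above boils down to a plain fio run along the following lines. The job parameters are copied from the [global]/[jobN] lines in the trace; the file name and the way the wrapper actually assembles and launches fio are not shown in this log and are assumed here:

cat > multiconnection-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
# ...one [jobN] stanza per connected namespace, /dev/nvme0n1 through /dev/nvme10n1 as listed above
EOF
fio multiconnection-randwrite.fio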
Could not set queue depth (nvme8n1) 00:26:41.886 Could not set queue depth (nvme9n1) 00:26:41.886 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:41.886 fio-3.35 00:26:41.886 Starting 11 threads 00:26:51.873 00:26:51.873 job0: (groupid=0, jobs=1): err= 0: pid=964821: Tue Oct 1 01:44:30 2024 00:26:51.873 write: IOPS=241, BW=60.4MiB/s (63.3MB/s)(625MiB/10353msec); 0 zone resets 00:26:51.873 slat (usec): min=15, max=164709, avg=2529.11, stdev=9518.44 00:26:51.873 clat (usec): min=1108, max=979987, avg=262170.71, stdev=216700.92 00:26:51.873 lat (usec): min=1133, max=980027, avg=264699.83, stdev=219163.14 00:26:51.873 clat percentiles (msec): 00:26:51.873 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 15], 20.00th=[ 37], 00:26:51.873 | 30.00th=[ 89], 40.00th=[ 176], 50.00th=[ 234], 60.00th=[ 288], 00:26:51.873 | 70.00th=[ 355], 80.00th=[ 447], 90.00th=[ 609], 95.00th=[ 659], 00:26:51.873 | 99.00th=[ 776], 99.50th=[ 877], 99.90th=[ 936], 99.95th=[ 978], 00:26:51.873 | 99.99th=[ 978] 00:26:51.873 bw ( KiB/s): min=22016, max=186880, per=7.34%, avg=62367.25, stdev=45848.03, samples=20 00:26:51.873 iops : min= 86, max= 730, avg=243.50, stdev=179.03, samples=20 00:26:51.873 lat (msec) : 2=0.32%, 4=0.96%, 10=4.44%, 20=7.96%, 50=11.88% 00:26:51.873 lat (msec) : 100=5.48%, 250=22.71%, 500=28.03%, 750=16.95%, 1000=1.28% 00:26:51.873 cpu : usr=0.72%, sys=0.78%, ctx=1763, majf=0, minf=1 00:26:51.873 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:51.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.873 issued rwts: total=0,2501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.873 job1: (groupid=0, jobs=1): err= 0: pid=964833: Tue Oct 1 01:44:30 2024 00:26:51.873 write: IOPS=309, BW=77.3MiB/s (81.1MB/s)(799MiB/10325msec); 0 zone resets 00:26:51.873 slat (usec): min=24, max=64509, avg=2307.87, stdev=6550.68 00:26:51.873 clat (usec): min=1129, max=906115, avg=204417.56, stdev=167294.84 00:26:51.873 lat (usec): min=1192, max=906153, avg=206725.43, stdev=169089.92 
00:26:51.873 clat percentiles (msec): 00:26:51.873 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 68], 00:26:51.873 | 30.00th=[ 87], 40.00th=[ 150], 50.00th=[ 180], 60.00th=[ 209], 00:26:51.873 | 70.00th=[ 234], 80.00th=[ 300], 90.00th=[ 456], 95.00th=[ 592], 00:26:51.873 | 99.00th=[ 684], 99.50th=[ 785], 99.90th=[ 877], 99.95th=[ 911], 00:26:51.873 | 99.99th=[ 911] 00:26:51.874 bw ( KiB/s): min=26112, max=273408, per=9.44%, avg=80137.85, stdev=56354.60, samples=20 00:26:51.874 iops : min= 102, max= 1068, avg=312.90, stdev=220.18, samples=20 00:26:51.874 lat (msec) : 2=0.50%, 4=1.44%, 10=2.10%, 20=4.60%, 50=7.76% 00:26:51.874 lat (msec) : 100=16.00%, 250=41.20%, 500=18.38%, 750=7.33%, 1000=0.69% 00:26:51.874 cpu : usr=1.20%, sys=1.08%, ctx=1648, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,3194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job2: (groupid=0, jobs=1): err= 0: pid=964834: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=263, BW=65.9MiB/s (69.1MB/s)(682MiB/10354msec); 0 zone resets 00:26:51.874 slat (usec): min=20, max=237388, avg=3070.01, stdev=10509.73 00:26:51.874 clat (usec): min=1175, max=861480, avg=239427.82, stdev=198785.28 00:26:51.874 lat (usec): min=1222, max=861519, avg=242497.83, stdev=200890.49 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 79], 20.00th=[ 84], 00:26:51.874 | 30.00th=[ 100], 40.00th=[ 124], 50.00th=[ 159], 60.00th=[ 205], 00:26:51.874 | 70.00th=[ 275], 80.00th=[ 405], 90.00th=[ 558], 95.00th=[ 676], 00:26:51.874 | 99.00th=[ 827], 99.50th=[ 835], 99.90th=[ 860], 99.95th=[ 860], 00:26:51.874 | 99.99th=[ 860] 00:26:51.874 bw ( KiB/s): min=14848, max=194560, per=8.03%, avg=68219.35, stdev=47840.47, samples=20 00:26:51.874 iops : min= 58, max= 760, avg=266.35, stdev=186.83, samples=20 00:26:51.874 lat (msec) : 2=0.07%, 4=0.40%, 10=0.51%, 20=1.58%, 50=2.57% 00:26:51.874 lat (msec) : 100=25.98%, 250=35.54%, 500=18.98%, 750=12.28%, 1000=2.09% 00:26:51.874 cpu : usr=0.88%, sys=0.76%, ctx=1046, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,2729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job3: (groupid=0, jobs=1): err= 0: pid=964835: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=224, BW=56.2MiB/s (58.9MB/s)(582MiB/10359msec); 0 zone resets 00:26:51.874 slat (usec): min=17, max=173238, avg=2978.80, stdev=10442.70 00:26:51.874 clat (usec): min=1589, max=961854, avg=281410.43, stdev=208783.49 00:26:51.874 lat (msec): min=2, max=961, avg=284.39, stdev=211.30 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 94], 00:26:51.874 | 30.00th=[ 148], 40.00th=[ 194], 50.00th=[ 234], 60.00th=[ 288], 00:26:51.874 | 70.00th=[ 355], 80.00th=[ 472], 90.00th=[ 609], 95.00th=[ 676], 00:26:51.874 | 99.00th=[ 760], 99.50th=[ 802], 99.90th=[ 927], 99.95th=[ 927], 00:26:51.874 | 99.99th=[ 961] 00:26:51.874 bw ( 
KiB/s): min=20480, max=181760, per=6.82%, avg=57964.75, stdev=36329.82, samples=20 00:26:51.874 iops : min= 80, max= 710, avg=226.30, stdev=141.93, samples=20 00:26:51.874 lat (msec) : 2=0.04%, 4=0.39%, 10=4.30%, 20=3.65%, 50=6.66% 00:26:51.874 lat (msec) : 100=5.93%, 250=33.33%, 500=26.59%, 750=17.44%, 1000=1.68% 00:26:51.874 cpu : usr=0.56%, sys=0.96%, ctx=1436, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,2328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job4: (groupid=0, jobs=1): err= 0: pid=964836: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=356, BW=89.2MiB/s (93.5MB/s)(925MiB/10372msec); 0 zone resets 00:26:51.874 slat (usec): min=20, max=79602, avg=2220.30, stdev=6433.52 00:26:51.874 clat (usec): min=1499, max=956246, avg=177041.47, stdev=160175.30 00:26:51.874 lat (usec): min=1537, max=956305, avg=179261.77, stdev=161673.68 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 37], 20.00th=[ 51], 00:26:51.874 | 30.00th=[ 82], 40.00th=[ 95], 50.00th=[ 126], 60.00th=[ 157], 00:26:51.874 | 70.00th=[ 199], 80.00th=[ 279], 90.00th=[ 388], 95.00th=[ 558], 00:26:51.874 | 99.00th=[ 676], 99.50th=[ 751], 99.90th=[ 953], 99.95th=[ 953], 00:26:51.874 | 99.99th=[ 953] 00:26:51.874 bw ( KiB/s): min=24576, max=265728, per=10.95%, avg=93046.25, stdev=61874.88, samples=20 00:26:51.874 iops : min= 96, max= 1038, avg=363.30, stdev=241.67, samples=20 00:26:51.874 lat (msec) : 2=0.03%, 4=0.32%, 10=2.78%, 20=2.62%, 50=13.22% 00:26:51.874 lat (msec) : 100=24.22%, 250=34.27%, 500=15.54%, 750=6.49%, 1000=0.51% 00:26:51.874 cpu : usr=1.18%, sys=1.20%, ctx=1608, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,3700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job5: (groupid=0, jobs=1): err= 0: pid=964839: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=256, BW=64.0MiB/s (67.1MB/s)(664MiB/10371msec); 0 zone resets 00:26:51.874 slat (usec): min=17, max=215277, avg=1864.03, stdev=8861.71 00:26:51.874 clat (usec): min=695, max=928920, avg=247913.53, stdev=206767.89 00:26:51.874 lat (usec): min=734, max=964169, avg=249777.56, stdev=209091.90 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 47], 00:26:51.874 | 30.00th=[ 101], 40.00th=[ 146], 50.00th=[ 192], 60.00th=[ 245], 00:26:51.874 | 70.00th=[ 347], 80.00th=[ 464], 90.00th=[ 567], 95.00th=[ 609], 00:26:51.874 | 99.00th=[ 760], 99.50th=[ 844], 99.90th=[ 927], 99.95th=[ 927], 00:26:51.874 | 99.99th=[ 927] 00:26:51.874 bw ( KiB/s): min=26624, max=170496, per=7.81%, avg=66345.55, stdev=37068.21, samples=20 00:26:51.874 iops : min= 104, max= 666, avg=259.10, stdev=144.84, samples=20 00:26:51.874 lat (usec) : 750=0.08%, 1000=0.19% 00:26:51.874 lat (msec) : 2=0.64%, 4=0.11%, 10=3.65%, 20=6.06%, 50=9.94% 00:26:51.874 lat (msec) : 100=9.45%, 250=30.66%, 500=21.85%, 750=16.27%, 1000=1.09% 00:26:51.874 cpu : usr=0.93%, 
sys=0.80%, ctx=2062, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,2655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job6: (groupid=0, jobs=1): err= 0: pid=964840: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=415, BW=104MiB/s (109MB/s)(1074MiB/10339msec); 0 zone resets 00:26:51.874 slat (usec): min=17, max=184872, avg=1558.27, stdev=5966.44 00:26:51.874 clat (usec): min=855, max=856003, avg=152322.39, stdev=162235.22 00:26:51.874 lat (usec): min=883, max=856029, avg=153880.66, stdev=163938.49 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 44], 00:26:51.874 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 108], 00:26:51.874 | 70.00th=[ 167], 80.00th=[ 271], 90.00th=[ 418], 95.00th=[ 510], 00:26:51.874 | 99.00th=[ 651], 99.50th=[ 684], 99.90th=[ 835], 99.95th=[ 835], 00:26:51.874 | 99.99th=[ 860] 00:26:51.874 bw ( KiB/s): min=34304, max=342016, per=12.75%, avg=108303.40, stdev=96380.99, samples=20 00:26:51.874 iops : min= 134, max= 1336, avg=422.95, stdev=376.51, samples=20 00:26:51.874 lat (usec) : 1000=0.12% 00:26:51.874 lat (msec) : 2=0.42%, 4=0.91%, 10=2.05%, 20=2.84%, 50=34.58% 00:26:51.874 lat (msec) : 100=17.65%, 250=18.14%, 500=17.83%, 750=5.17%, 1000=0.30% 00:26:51.874 cpu : usr=1.25%, sys=1.44%, ctx=2155, majf=0, minf=1 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,4295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job7: (groupid=0, jobs=1): err= 0: pid=964841: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=409, BW=102MiB/s (107MB/s)(1031MiB/10059msec); 0 zone resets 00:26:51.874 slat (usec): min=23, max=81210, avg=1982.02, stdev=5209.97 00:26:51.874 clat (usec): min=1139, max=636211, avg=154144.67, stdev=119782.80 00:26:51.874 lat (usec): min=1209, max=645151, avg=156126.69, stdev=120869.21 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 79], 00:26:51.874 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 108], 60.00th=[ 129], 00:26:51.874 | 70.00th=[ 182], 80.00th=[ 241], 90.00th=[ 292], 95.00th=[ 414], 00:26:51.874 | 99.00th=[ 592], 99.50th=[ 609], 99.90th=[ 617], 99.95th=[ 625], 00:26:51.874 | 99.99th=[ 634] 00:26:51.874 bw ( KiB/s): min=27081, max=221696, per=12.23%, avg=103883.00, stdev=62333.83, samples=20 00:26:51.874 iops : min= 105, max= 866, avg=405.65, stdev=243.47, samples=20 00:26:51.874 lat (msec) : 2=0.27%, 4=0.39%, 10=1.09%, 20=0.90%, 50=9.87% 00:26:51.874 lat (msec) : 100=33.84%, 250=35.20%, 500=15.82%, 750=2.62% 00:26:51.874 cpu : usr=1.13%, sys=1.46%, ctx=1565, majf=0, minf=2 00:26:51.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:51.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.874 issued rwts: total=0,4122,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:51.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.874 job8: (groupid=0, jobs=1): err= 0: pid=964844: Tue Oct 1 01:44:30 2024 00:26:51.874 write: IOPS=281, BW=70.3MiB/s (73.8MB/s)(729MiB/10359msec); 0 zone resets 00:26:51.874 slat (usec): min=18, max=219860, avg=2980.47, stdev=10418.65 00:26:51.874 clat (usec): min=1171, max=945827, avg=224280.06, stdev=206782.57 00:26:51.874 lat (usec): min=1212, max=945872, avg=227260.53, stdev=209406.04 00:26:51.874 clat percentiles (msec): 00:26:51.874 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 50], 00:26:51.874 | 30.00th=[ 58], 40.00th=[ 116], 50.00th=[ 155], 60.00th=[ 211], 00:26:51.874 | 70.00th=[ 275], 80.00th=[ 368], 90.00th=[ 584], 95.00th=[ 659], 00:26:51.874 | 99.00th=[ 818], 99.50th=[ 827], 99.90th=[ 911], 99.95th=[ 944], 00:26:51.875 | 99.99th=[ 944] 00:26:51.875 bw ( KiB/s): min=16384, max=293888, per=8.59%, avg=72962.15, stdev=65173.70, samples=20 00:26:51.875 iops : min= 64, max= 1148, avg=284.80, stdev=254.51, samples=20 00:26:51.875 lat (msec) : 2=0.17%, 4=0.65%, 10=0.75%, 20=1.17%, 50=20.41% 00:26:51.875 lat (msec) : 100=14.27%, 250=29.06%, 500=19.90%, 750=10.70%, 1000=2.92% 00:26:51.875 cpu : usr=0.98%, sys=0.98%, ctx=1209, majf=0, minf=2 00:26:51.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.875 issued rwts: total=0,2915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.875 job9: (groupid=0, jobs=1): err= 0: pid=964845: Tue Oct 1 01:44:30 2024 00:26:51.875 write: IOPS=216, BW=54.1MiB/s (56.7MB/s)(561MiB/10367msec); 0 zone resets 00:26:51.875 slat (usec): min=22, max=258865, avg=2863.55, stdev=10834.41 00:26:51.875 clat (usec): min=1401, max=1014.8k, avg=292689.93, stdev=223503.35 00:26:51.875 lat (usec): min=1446, max=1014.9k, avg=295553.48, stdev=225335.63 00:26:51.875 clat percentiles (msec): 00:26:51.875 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 28], 20.00th=[ 71], 00:26:51.875 | 30.00th=[ 120], 40.00th=[ 176], 50.00th=[ 249], 60.00th=[ 342], 00:26:51.875 | 70.00th=[ 422], 80.00th=[ 527], 90.00th=[ 600], 95.00th=[ 651], 00:26:51.875 | 99.00th=[ 944], 99.50th=[ 995], 99.90th=[ 1011], 99.95th=[ 1011], 00:26:51.875 | 99.99th=[ 1011] 00:26:51.875 bw ( KiB/s): min=26624, max=130048, per=6.57%, avg=55798.10, stdev=30829.76, samples=20 00:26:51.875 iops : min= 104, max= 508, avg=217.80, stdev=120.49, samples=20 00:26:51.875 lat (msec) : 2=0.09%, 4=1.83%, 10=2.94%, 20=2.94%, 50=8.11% 00:26:51.875 lat (msec) : 100=11.86%, 250=22.43%, 500=27.02%, 750=21.58%, 1000=0.80% 00:26:51.875 lat (msec) : 2000=0.40% 00:26:51.875 cpu : usr=0.68%, sys=0.96%, ctx=1490, majf=0, minf=1 00:26:51.875 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.875 issued rwts: total=0,2243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.875 job10: (groupid=0, jobs=1): err= 0: pid=964846: Tue Oct 1 01:44:30 2024 00:26:51.875 write: IOPS=360, BW=90.1MiB/s (94.4MB/s)(933MiB/10354msec); 0 zone resets 00:26:51.875 slat (usec): min=19, max=86388, avg=2386.88, stdev=7051.40 00:26:51.875 clat (usec): 
min=1823, max=735527, avg=175125.13, stdev=169405.85 00:26:51.875 lat (usec): min=1909, max=735597, avg=177512.01, stdev=171462.17 00:26:51.875 clat percentiles (msec): 00:26:51.875 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 45], 00:26:51.875 | 30.00th=[ 62], 40.00th=[ 77], 50.00th=[ 100], 60.00th=[ 155], 00:26:51.875 | 70.00th=[ 215], 80.00th=[ 288], 90.00th=[ 388], 95.00th=[ 625], 00:26:51.875 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 735], 99.95th=[ 735], 00:26:51.875 | 99.99th=[ 735] 00:26:51.875 bw ( KiB/s): min=20480, max=279040, per=11.05%, avg=93858.40, stdev=82145.47, samples=20 00:26:51.875 iops : min= 80, max= 1090, avg=366.55, stdev=320.92, samples=20 00:26:51.875 lat (msec) : 2=0.05%, 4=0.21%, 10=0.78%, 20=3.14%, 50=21.31% 00:26:51.875 lat (msec) : 100=24.69%, 250=25.84%, 500=16.46%, 750=7.51% 00:26:51.875 cpu : usr=1.06%, sys=1.34%, ctx=1508, majf=0, minf=1 00:26:51.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:51.875 issued rwts: total=0,3730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:51.875 00:26:51.875 Run status group 0 (all jobs): 00:26:51.875 WRITE: bw=829MiB/s (870MB/s), 54.1MiB/s-104MiB/s (56.7MB/s-109MB/s), io=8603MiB (9021MB), run=10059-10372msec 00:26:51.875 00:26:51.875 Disk stats (read/write): 00:26:51.875 nvme0n1: ios=49/4916, merge=0/0, ticks=327/1223156, in_queue=1223483, util=99.90% 00:26:51.875 nvme10n1: ios=41/6319, merge=0/0, ticks=43/1218437, in_queue=1218480, util=97.45% 00:26:51.875 nvme1n1: ios=44/5377, merge=0/0, ticks=1771/1203370, in_queue=1205141, util=99.90% 00:26:51.875 nvme2n1: ios=48/4574, merge=0/0, ticks=2696/1220727, in_queue=1223423, util=99.90% 00:26:51.875 nvme3n1: ios=46/7312, merge=0/0, ticks=283/1216663, in_queue=1216946, util=100.00% 00:26:51.875 nvme4n1: ios=25/5205, merge=0/0, ticks=22/1213791, in_queue=1213813, util=98.17% 00:26:51.875 nvme5n1: ios=46/8518, merge=0/0, ticks=1644/1223120, in_queue=1224764, util=99.95% 00:26:51.875 nvme6n1: ios=0/7987, merge=0/0, ticks=0/1213973, in_queue=1213973, util=98.35% 00:26:51.875 nvme7n1: ios=0/5746, merge=0/0, ticks=0/1209089, in_queue=1209089, util=98.80% 00:26:51.875 nvme8n1: ios=0/4411, merge=0/0, ticks=0/1230366, in_queue=1230366, util=99.00% 00:26:51.875 nvme9n1: ios=0/7374, merge=0/0, ticks=0/1206761, in_queue=1206761, util=99.05% 00:26:51.875 01:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:51.875 01:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:51.875 01:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.875 01:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:51.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
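The disconnect sequence traced from this point on repeats the same three steps for cnode1 through cnode11: disconnect the initiator, poll lsblk until the SPDKn serial disappears, then delete the subsystem over RPC. Condensed into plain shell (the rpc.py path is an assumption based on the workspace layout seen elsewhere in the log; the script itself goes through its rpc_cmd helper), the loop is roughly:

# Condensed sketch of the traced teardown, not the test script verbatim.
for i in $(seq 1 11); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
  # wait until no block device with serial SPDK${i} is left
  while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
    sleep 1
  done
  # rpc.py path assumed; the log's rpc_cmd wrapper resolves to the SPDK RPC client
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done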
00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.875 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:52.134 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.134 01:44:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:52.393 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.393 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:52.651 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:52.652 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.652 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:52.912 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:52.912 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.913 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:53.172 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.172 01:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:53.431 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:53.431 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.431 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:53.690 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:53.690 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.690 rmmod nvme_tcp 00:26:53.690 rmmod nvme_fabrics 00:26:53.690 rmmod nvme_keyring 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 959990 ']' 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 959990 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 959990 ']' 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 959990 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 959990 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:53.690 01:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959990' 00:26:53.690 killing process with pid 959990 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 959990 00:26:53.690 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 959990 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.258 01:44:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.173 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.433 00:26:56.433 real 1m1.195s 00:26:56.433 user 3m34.865s 00:26:56.433 sys 0m15.529s 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:56.433 ************************************ 00:26:56.433 END TEST nvmf_multiconnection 00:26:56.433 ************************************ 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:56.433 ************************************ 00:26:56.433 START TEST nvmf_initiator_timeout 00:26:56.433 ************************************ 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:56.433 * Looking for test storage... 
00:26:56.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:56.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.433 --rc genhtml_branch_coverage=1 00:26:56.433 --rc genhtml_function_coverage=1 00:26:56.433 --rc genhtml_legend=1 00:26:56.433 --rc geninfo_all_blocks=1 00:26:56.433 --rc geninfo_unexecuted_blocks=1 00:26:56.433 00:26:56.433 ' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:56.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.433 --rc genhtml_branch_coverage=1 00:26:56.433 --rc genhtml_function_coverage=1 00:26:56.433 --rc genhtml_legend=1 00:26:56.433 --rc geninfo_all_blocks=1 00:26:56.433 --rc geninfo_unexecuted_blocks=1 00:26:56.433 00:26:56.433 ' 00:26:56.433 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:56.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.433 --rc genhtml_branch_coverage=1 00:26:56.433 --rc genhtml_function_coverage=1 00:26:56.433 --rc genhtml_legend=1 00:26:56.433 --rc geninfo_all_blocks=1 00:26:56.434 --rc geninfo_unexecuted_blocks=1 00:26:56.434 00:26:56.434 ' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:56.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.434 --rc genhtml_branch_coverage=1 00:26:56.434 --rc genhtml_function_coverage=1 00:26:56.434 --rc genhtml_legend=1 00:26:56.434 --rc geninfo_all_blocks=1 00:26:56.434 --rc geninfo_unexecuted_blocks=1 00:26:56.434 00:26:56.434 ' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.434 01:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.434 01:44:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.975 01:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.975 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:58.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:58.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:58.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:58.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:26:58.976 00:26:58.976 --- 10.0.0.2 ping statistics --- 00:26:58.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.976 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:26:58.976 00:26:58.976 --- 10.0.0.1 ping statistics --- 00:26:58.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.976 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=968029 00:26:58.976 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # 
waitforlisten 968029 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 968029 ']' 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.977 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.977 [2024-10-01 01:44:38.564582] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:26:58.977 [2024-10-01 01:44:38.564679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.977 [2024-10-01 01:44:38.636063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.977 [2024-10-01 01:44:38.727105] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.977 [2024-10-01 01:44:38.727165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.977 [2024-10-01 01:44:38.727182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.977 [2024-10-01 01:44:38.727196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.977 [2024-10-01 01:44:38.727208] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
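The nvmftestinit trace above reduces to a small network-namespace loopback between the two discovered E810 ports: one port (cvl_0_0) is moved into a dedicated namespace and becomes the target side on 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the initiator side on 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction confirms the path. A condensed sketch reconstructed from the commands shown in this log; the interface names and addresses are the ones this run reports and would differ on other hosts.

#!/usr/bin/env bash
# Sketch of the topology nvmftestinit builds in the trace above.
set -e

TARGET_IF=cvl_0_0          # port moved into the target namespace
INITIATOR_IF=cvl_0_1       # port left in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"     # NVMF_FIRST_TARGET_IP

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to port 4420 on the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Connectivity sanity checks, as in the trace
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so it listens on 10.0.0.2 while the kernel initiator connects from the host side of the loopback.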
00:26:58.977 [2024-10-01 01:44:38.727277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.977 [2024-10-01 01:44:38.727335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.977 [2024-10-01 01:44:38.727452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.977 [2024-10-01 01:44:38.727454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 Malloc0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 Delay0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 [2024-10-01 01:44:38.915661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.235 [2024-10-01 01:44:38.943945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.235 01:44:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:59.800 01:44:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:59.800 01:44:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:59.800 01:44:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.800 01:44:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:59.800 01:44:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=968455 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:27:02.333 01:44:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:02.333 [global] 00:27:02.333 thread=1 00:27:02.333 invalidate=1 00:27:02.333 rw=write 00:27:02.333 time_based=1 00:27:02.333 runtime=60 00:27:02.333 ioengine=libaio 00:27:02.333 direct=1 00:27:02.333 bs=4096 00:27:02.333 iodepth=1 00:27:02.333 norandommap=0 00:27:02.333 numjobs=1 00:27:02.333 00:27:02.333 verify_dump=1 00:27:02.333 verify_backlog=512 00:27:02.333 verify_state_save=0 00:27:02.333 do_verify=1 00:27:02.333 verify=crc32c-intel 00:27:02.333 [job0] 00:27:02.333 filename=/dev/nvme0n1 00:27:02.333 Could not set queue depth (nvme0n1) 00:27:02.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:02.333 fio-3.35 00:27:02.333 Starting 1 thread 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 true 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 true 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 true 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 true 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.864 01:44:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:08.152 true 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.152 true 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.152 true 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:08.152 true 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:08.152 01:44:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 968455 00:28:04.473 00:28:04.473 job0: (groupid=0, jobs=1): err= 0: pid=968524: Tue Oct 1 01:45:42 2024 00:28:04.473 read: IOPS=31, BW=127KiB/s (130kB/s)(7616KiB/60034msec) 00:28:04.473 slat (usec): min=7, max=15560, avg=29.61, stdev=429.46 00:28:04.473 clat (usec): min=285, max=41211k, avg=31171.53, stdev=944397.20 00:28:04.473 lat (usec): min=293, max=41211k, avg=31201.14, stdev=944396.93 00:28:04.473 clat percentiles (usec): 00:28:04.473 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 00:28:04.473 | 20.00th=[ 326], 30.00th=[ 334], 40.00th=[ 343], 00:28:04.473 | 50.00th=[ 355], 60.00th=[ 375], 70.00th=[ 400], 00:28:04.473 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:04.473 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:28:04.473 | 99.95th=[17112761], 99.99th=[17112761] 00:28:04.473 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60034msec); 0 zone resets 00:28:04.473 slat (usec): min=8, max=30739, avg=35.01, stdev=678.88 00:28:04.474 clat (usec): min=192, max=476, avg=261.27, stdev=54.28 00:28:04.474 lat (usec): min=202, max=31100, avg=296.28, stdev=683.78 00:28:04.474 clat percentiles (usec): 00:28:04.474 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:28:04.474 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 258], 00:28:04.474 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 383], 00:28:04.474 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 457], 99.95th=[ 465], 00:28:04.474 | 99.99th=[ 478] 
00:28:04.474 bw ( KiB/s): min= 4096, max= 6888, per=100.00%, avg=5461.33, stdev=1397.01, samples=3 00:28:04.474 iops : min= 1024, max= 1722, avg=1365.33, stdev=349.25, samples=3 00:28:04.474 lat (usec) : 250=28.21%, 500=59.92%, 750=0.99% 00:28:04.474 lat (msec) : 50=10.86%, >=2000=0.03% 00:28:04.474 cpu : usr=0.09%, sys=0.16%, ctx=3956, majf=0, minf=1 00:28:04.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.474 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:04.474 00:28:04.474 Run status group 0 (all jobs): 00:28:04.474 READ: bw=127KiB/s (130kB/s), 127KiB/s-127KiB/s (130kB/s-130kB/s), io=7616KiB (7799kB), run=60034-60034msec 00:28:04.474 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60034-60034msec 00:28:04.474 00:28:04.474 Disk stats (read/write): 00:28:04.474 nvme0n1: ios=1952/2048, merge=0/0, ticks=19320/509, in_queue=19829, util=99.96% 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:04.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:04.474 nvmf hotplug test: fio successful as expected 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 
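The initiator_timeout run traced above follows a fixed sequence: create a 64 MiB malloc bdev, wrap it in a delay bdev, expose it over NVMe/TCP, connect with the kernel initiator, start a 60-second fio write/verify job against /dev/nvme0n1, then raise the delay bdev latencies far above the I/O timeout (which produces the 40+ second completions visible in the clat percentiles) before dropping them back so fio can finish cleanly. Below is a condensed sketch of that flow; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, the RPC arguments and NQN/serial are copied from this log, and the fio flags mirror the [global]/[job0] job file printed above (which additionally sets thread, invalidate, norandommap and verify_state_save).

#!/usr/bin/env bash
# Condensed sketch of the initiator_timeout flow traced above (values from this log).
set -e
rpc=./scripts/rpc.py

# Target side: malloc bdev -> delay bdev -> NVMe/TCP subsystem, flags as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # baseline latencies
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: kernel NVMe/TCP connect; the test then polls lsblk for the
# SPDKISFASTANDAWESOME serial before starting fio.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# 60 s time-based write job with CRC32C verification, as in the generated job file
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=60 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 &
fio_pid=$!

# While fio runs, push the delay latencies up (arguments are microseconds,
# i.e. ~31 s), which stalls in-flight I/O past the initiator timeout,
# then restore the small values so the job can complete.
sleep 3
for lat in avg_read avg_write p99_read; do
  $rpc bdev_delay_update_latency Delay0 $lat 31000000
done
$rpc bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
  $rpc bdev_delay_update_latency Delay0 $lat 30
done
wait $fio_pid   # exit 0 => "nvmf hotplug test: fio successful as expected"

After the wait, the trace above disconnects the controller, deletes the subsystem, removes the nvme-tcp/nvme-fabrics modules, kills the target and tears the namespace down.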
00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.474 rmmod nvme_tcp 00:28:04.474 rmmod nvme_fabrics 00:28:04.474 rmmod nvme_keyring 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 968029 ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 968029 ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 968029' 00:28:04.474 killing process with pid 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 968029 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:04.474 01:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.474 01:45:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.733 01:45:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.733 00:28:04.733 real 1m8.488s 00:28:04.733 user 4m11.219s 00:28:04.733 sys 0m6.490s 00:28:04.733 01:45:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.733 01:45:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:04.733 ************************************ 00:28:04.733 END TEST nvmf_initiator_timeout 00:28:04.733 ************************************ 00:28:04.992 01:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:04.992 01:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:04.992 01:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:04.992 01:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.992 01:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.895 01:45:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.895 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:06.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:06.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
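The gather_supported_nvmf_pci_devs pass being traced here (and earlier, before the netns setup) matches supported Intel/Mellanox PCI vendor:device IDs against the bus cache and then resolves each matched device to its kernel net device name through sysfs. A minimal sketch of that sysfs lookup, using the two E810 (0x8086:0x159b) addresses this log reports:

# For each matched PCI function, the net device name is read from sysfs,
# exactly as the pci_net_devs expansion in the trace does.
for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

In this run both functions resolve to cvl_0_0 and cvl_0_1, which is what populates TCP_INTERFACE_LIST for the TCP tests.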
00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:06.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:06.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.896 ************************************ 00:28:06.896 START TEST nvmf_perf_adq 00:28:06.896 ************************************ 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:06.896 * Looking for test storage... 
00:28:06.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:28:06.896 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:07.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.156 --rc genhtml_branch_coverage=1 00:28:07.156 --rc genhtml_function_coverage=1 00:28:07.156 --rc genhtml_legend=1 00:28:07.156 --rc geninfo_all_blocks=1 00:28:07.156 --rc geninfo_unexecuted_blocks=1 00:28:07.156 00:28:07.156 ' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:07.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.156 --rc genhtml_branch_coverage=1 00:28:07.156 --rc genhtml_function_coverage=1 00:28:07.156 --rc genhtml_legend=1 00:28:07.156 --rc geninfo_all_blocks=1 00:28:07.156 --rc geninfo_unexecuted_blocks=1 00:28:07.156 00:28:07.156 ' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:07.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.156 --rc genhtml_branch_coverage=1 00:28:07.156 --rc genhtml_function_coverage=1 00:28:07.156 --rc genhtml_legend=1 00:28:07.156 --rc geninfo_all_blocks=1 00:28:07.156 --rc geninfo_unexecuted_blocks=1 00:28:07.156 00:28:07.156 ' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:07.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.156 --rc genhtml_branch_coverage=1 00:28:07.156 --rc genhtml_function_coverage=1 00:28:07.156 --rc genhtml_legend=1 00:28:07.156 --rc geninfo_all_blocks=1 00:28:07.156 --rc geninfo_unexecuted_blocks=1 00:28:07.156 00:28:07.156 ' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
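The lcov probe traced above runs the repo's version-comparison helper: split both version strings on '.', '-' or ':', validate each field as a decimal, and compare the fields left to right. A condensed stand-alone equivalent is sketched below; ver_lt is a hypothetical name for illustration, while the real helpers in the trace are lt/cmp_versions/decimal from scripts/common.sh.

# Returns success when version $1 sorts strictly before version $2.
ver_lt() {
  local IFS=.-: v
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
    (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # missing fields compare as 0
    (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"

In the trace this check (lt 1.15 2) succeeds for lcov 1.15 and gates which LCOV_OPTS branch/function-coverage flags get exported for the rest of the test run.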
00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.156 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:07.157 01:45:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.157 01:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.080 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:09.081 01:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:09.649 01:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:12.182 01:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.454 01:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.454 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:28:17.455 00:28:17.455 --- 10.0.0.2 ping statistics --- 00:28:17.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.455 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:17.455 00:28:17.455 --- 10.0.0.1 ping statistics --- 00:28:17.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.455 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:17.455 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=980687 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 980687 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 980687 ']' 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.456 01:45:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.456 [2024-10-01 01:45:56.904069] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:17.456 [2024-10-01 01:45:56.904155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.456 [2024-10-01 01:45:56.976201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.456 [2024-10-01 01:45:57.066267] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.456 [2024-10-01 01:45:57.066354] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.456 [2024-10-01 01:45:57.066368] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.456 [2024-10-01 01:45:57.066379] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.456 [2024-10-01 01:45:57.066388] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
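At this point nvmftestinit has finished wiring the physical topology: the two detected E810 ports are split so that cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) while cvl_0_1 stays on the host as the initiator (10.0.0.1/24), TCP port 4420 is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is launched inside the namespace with --wait-for-rpc so it sits idle until configured. A condensed sketch of that sequence, assuming the interface names, addresses, and relative build path from this run (they will differ on other rigs):

#!/usr/bin/env bash
# Condensed sketch of the namespace wiring performed by nvmftestinit in this run.
# Interface names, addresses and paths mirror this log; adjust per rig.
set -e

TGT_IF=cvl_0_0            # port handed to the SPDK target
INI_IF=cvl_0_1            # port kept on the host for the initiator
NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

# Start the target in the namespace, paused until RPC configuration arrives.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The --wait-for-rpc flag matters for this flow: the socket implementation options have to be applied over RPC before framework_start_init and nvmf_create_transport, which is exactly the order the next trace lines follow.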
00:28:17.456 [2024-10-01 01:45:57.066491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.456 [2024-10-01 01:45:57.066531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.456 [2024-10-01 01:45:57.066610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.456 [2024-10-01 01:45:57.066613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.456 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 [2024-10-01 01:45:57.336460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 Malloc1 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.716 [2024-10-01 01:45:57.389554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=980840 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:17.716 01:45:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:19.621 "tick_rate": 2700000000, 00:28:19.621 "poll_groups": [ 00:28:19.621 { 00:28:19.621 "name": "nvmf_tgt_poll_group_000", 00:28:19.621 "admin_qpairs": 1, 00:28:19.621 "io_qpairs": 1, 00:28:19.621 "current_admin_qpairs": 1, 00:28:19.621 "current_io_qpairs": 1, 00:28:19.621 "pending_bdev_io": 0, 00:28:19.621 
"completed_nvme_io": 20370, 00:28:19.621 "transports": [ 00:28:19.621 { 00:28:19.621 "trtype": "TCP" 00:28:19.621 } 00:28:19.621 ] 00:28:19.621 }, 00:28:19.621 { 00:28:19.621 "name": "nvmf_tgt_poll_group_001", 00:28:19.621 "admin_qpairs": 0, 00:28:19.621 "io_qpairs": 1, 00:28:19.621 "current_admin_qpairs": 0, 00:28:19.621 "current_io_qpairs": 1, 00:28:19.621 "pending_bdev_io": 0, 00:28:19.621 "completed_nvme_io": 19969, 00:28:19.621 "transports": [ 00:28:19.621 { 00:28:19.621 "trtype": "TCP" 00:28:19.621 } 00:28:19.621 ] 00:28:19.621 }, 00:28:19.621 { 00:28:19.621 "name": "nvmf_tgt_poll_group_002", 00:28:19.621 "admin_qpairs": 0, 00:28:19.621 "io_qpairs": 1, 00:28:19.621 "current_admin_qpairs": 0, 00:28:19.621 "current_io_qpairs": 1, 00:28:19.621 "pending_bdev_io": 0, 00:28:19.621 "completed_nvme_io": 18256, 00:28:19.621 "transports": [ 00:28:19.621 { 00:28:19.621 "trtype": "TCP" 00:28:19.621 } 00:28:19.621 ] 00:28:19.621 }, 00:28:19.621 { 00:28:19.621 "name": "nvmf_tgt_poll_group_003", 00:28:19.621 "admin_qpairs": 0, 00:28:19.621 "io_qpairs": 1, 00:28:19.621 "current_admin_qpairs": 0, 00:28:19.621 "current_io_qpairs": 1, 00:28:19.621 "pending_bdev_io": 0, 00:28:19.621 "completed_nvme_io": 20114, 00:28:19.621 "transports": [ 00:28:19.621 { 00:28:19.621 "trtype": "TCP" 00:28:19.621 } 00:28:19.621 ] 00:28:19.621 } 00:28:19.621 ] 00:28:19.621 }' 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:19.621 01:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 980840 00:28:27.814 Initializing NVMe Controllers 00:28:27.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:27.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:27.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:27.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:27.814 Initialization complete. Launching workers. 
00:28:27.814 ======================================================== 00:28:27.814 Latency(us) 00:28:27.814 Device Information : IOPS MiB/s Average min max 00:28:27.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10610.20 41.45 6033.41 2569.98 7964.24 00:28:27.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10583.00 41.34 6047.34 3292.12 7737.28 00:28:27.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9541.40 37.27 6708.70 2753.51 10352.97 00:28:27.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10758.80 42.03 5948.89 2359.71 7544.21 00:28:27.814 ======================================================== 00:28:27.814 Total : 41493.38 162.08 6170.33 2359.71 10352.97 00:28:27.814 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.814 rmmod nvme_tcp 00:28:27.814 rmmod nvme_fabrics 00:28:27.814 rmmod nvme_keyring 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 980687 ']' 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 980687 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 980687 ']' 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 980687 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 980687 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 980687' 00:28:27.814 killing process with pid 980687 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 980687 00:28:27.814 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 980687 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:28.073 01:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.073 01:46:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.613 01:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:30.613 01:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:30.613 01:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:30.613 01:46:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:30.872 01:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:33.406 01:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.680 01:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.680 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.681 01:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:28:38.681 00:28:38.681 --- 10.0.0.2 ping statistics --- 00:28:38.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.681 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:28:38.681 00:28:38.681 --- 10.0.0.1 ping statistics --- 00:28:38.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.681 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:38.681 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:38.682 net.core.busy_poll = 1 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:38.682 net.core.busy_read = 1 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:38.682 01:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=983453 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 983453 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 983453 ']' 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.682 [2024-10-01 01:46:18.273036] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:38.682 [2024-10-01 01:46:18.273136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.682 [2024-10-01 01:46:18.342576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.682 [2024-10-01 01:46:18.432614] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.682 [2024-10-01 01:46:18.432686] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:38.682 [2024-10-01 01:46:18.432700] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.682 [2024-10-01 01:46:18.432711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.682 [2024-10-01 01:46:18.432721] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.682 [2024-10-01 01:46:18.432806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.682 [2024-10-01 01:46:18.432890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.682 [2024-10-01 01:46:18.432893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.682 [2024-10-01 01:46:18.432830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.682 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:38.941 01:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 [2024-10-01 01:46:18.706419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 Malloc1 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 [2024-10-01 01:46:18.757713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=983606 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.941 01:46:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
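The target side of this ADQ run comes down to a short RPC sequence: because the app was launched with --wait-for-rpc, the socket implementation can still be tuned (placement IDs, zero-copy send) before framework_start_init, after which the TCP transport is created with --sock-priority 1 and a single Malloc-backed subsystem is exposed on 10.0.0.2:4420. A minimal sketch of the same steps, assuming scripts/rpc.py from the SPDK checkout is called directly instead of the harness's rpc_cmd wrapper:

rpc=./scripts/rpc.py                                        # assumed path inside the SPDK tree

impl=$("$rpc" sock_get_default_impl | jq -r .impl_name)     # "posix" in this run
"$rpc" sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
"$rpc" framework_start_init                                 # release the --wait-for-rpc hold

"$rpc" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
"$rpc" bdev_malloc_create 64 512 -b Malloc1                 # 64 MiB bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420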
00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:41.477 "tick_rate": 2700000000, 00:28:41.477 "poll_groups": [ 00:28:41.477 { 00:28:41.477 "name": "nvmf_tgt_poll_group_000", 00:28:41.477 "admin_qpairs": 1, 00:28:41.477 "io_qpairs": 3, 00:28:41.477 "current_admin_qpairs": 1, 00:28:41.477 "current_io_qpairs": 3, 00:28:41.477 "pending_bdev_io": 0, 00:28:41.477 "completed_nvme_io": 26992, 00:28:41.477 "transports": [ 00:28:41.477 { 00:28:41.477 "trtype": "TCP" 00:28:41.477 } 00:28:41.477 ] 00:28:41.477 }, 00:28:41.477 { 00:28:41.477 "name": "nvmf_tgt_poll_group_001", 00:28:41.477 "admin_qpairs": 0, 00:28:41.477 "io_qpairs": 1, 00:28:41.477 "current_admin_qpairs": 0, 00:28:41.477 "current_io_qpairs": 1, 00:28:41.477 "pending_bdev_io": 0, 00:28:41.477 "completed_nvme_io": 23346, 00:28:41.477 "transports": [ 00:28:41.477 { 00:28:41.477 "trtype": "TCP" 00:28:41.477 } 00:28:41.477 ] 00:28:41.477 }, 00:28:41.477 { 00:28:41.477 "name": "nvmf_tgt_poll_group_002", 00:28:41.477 "admin_qpairs": 0, 00:28:41.477 "io_qpairs": 0, 00:28:41.477 "current_admin_qpairs": 0, 00:28:41.477 "current_io_qpairs": 0, 00:28:41.477 "pending_bdev_io": 0, 00:28:41.477 "completed_nvme_io": 0, 00:28:41.477 "transports": [ 00:28:41.477 { 00:28:41.477 "trtype": "TCP" 00:28:41.477 } 00:28:41.477 ] 00:28:41.477 }, 00:28:41.477 { 00:28:41.477 "name": "nvmf_tgt_poll_group_003", 00:28:41.477 "admin_qpairs": 0, 00:28:41.477 "io_qpairs": 0, 00:28:41.477 "current_admin_qpairs": 0, 00:28:41.477 "current_io_qpairs": 0, 00:28:41.477 "pending_bdev_io": 0, 00:28:41.477 "completed_nvme_io": 0, 00:28:41.477 "transports": [ 00:28:41.477 { 00:28:41.477 "trtype": "TCP" 00:28:41.477 } 00:28:41.477 ] 00:28:41.477 } 00:28:41.477 ] 00:28:41.477 }' 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:41.477 01:46:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 983606 00:28:49.600 Initializing NVMe Controllers 00:28:49.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:49.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:49.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:49.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:49.600 Initialization complete. Launching workers. 
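The nvmf_get_stats dump above is how the script judges whether ADQ steering worked: the target runs on four cores (-m 0xF), so there are four poll groups, and with queue steering plus busy polling the four I/O qpairs opened by the perf job should land on a subset of them instead of being spread round-robin. The check simply counts groups that own no I/O qpairs; the spdk_nvme_perf numbers this produces follow just below. A sketch of the same check, assuming the JSON shape printed above:

# Count poll groups that currently own no I/O qpairs.
idle=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
# perf_adq.sh treats fewer than two idle groups as a failed steering check.
(( idle >= 2 )) || echo "ADQ check failed: only $idle of 4 poll groups are idle" >&2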
00:28:49.600 ======================================================== 00:28:49.600 Latency(us) 00:28:49.600 Device Information : IOPS MiB/s Average min max 00:28:49.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4858.20 18.98 13188.11 1823.57 61281.67 00:28:49.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4721.70 18.44 13555.25 2154.54 62549.42 00:28:49.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12269.60 47.93 5216.86 1584.87 7858.70 00:28:49.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4600.70 17.97 13915.65 2128.81 60720.45 00:28:49.600 ======================================================== 00:28:49.600 Total : 26450.19 103.32 9682.53 1584.87 62549.42 00:28:49.600 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.600 rmmod nvme_tcp 00:28:49.600 rmmod nvme_fabrics 00:28:49.600 rmmod nvme_keyring 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 983453 ']' 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 983453 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 983453 ']' 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 983453 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.600 01:46:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 983453 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 983453' 00:28:49.600 killing process with pid 983453 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 983453 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 983453 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:49.600 01:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.600 01:46:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:52.896 00:28:52.896 real 0m45.689s 00:28:52.896 user 2m38.005s 00:28:52.896 sys 0m10.202s 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.896 ************************************ 00:28:52.896 END TEST nvmf_perf_adq 00:28:52.896 ************************************ 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:52.896 ************************************ 00:28:52.896 START TEST nvmf_shutdown 00:28:52.896 ************************************ 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:52.896 * Looking for test storage... 
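The nvmftestfini path traced a little above unwinds everything the setup created: unload the host-side NVMe modules, stop the nvmf_tgt, strip only the iptables rules the test tagged with an SPDK_NVMF comment, then tear down the scratch namespace and flush the addresses. A condensed sketch of that teardown, with names taken from this run and remove_spdk_ns approximated by an explicit netns delete:

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null     # stop the target started earlier (same shell)
# Drop only the rules the test added (they carry an SPDK_NVMF comment), keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns del cvl_0_0_ns_spdk                       # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1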
00:28:52.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.896 --rc genhtml_branch_coverage=1 00:28:52.896 --rc genhtml_function_coverage=1 00:28:52.896 --rc genhtml_legend=1 00:28:52.896 --rc geninfo_all_blocks=1 00:28:52.896 --rc geninfo_unexecuted_blocks=1 00:28:52.896 00:28:52.896 ' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.896 --rc genhtml_branch_coverage=1 00:28:52.896 --rc genhtml_function_coverage=1 00:28:52.896 --rc genhtml_legend=1 00:28:52.896 --rc geninfo_all_blocks=1 00:28:52.896 --rc geninfo_unexecuted_blocks=1 00:28:52.896 00:28:52.896 ' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.896 --rc genhtml_branch_coverage=1 00:28:52.896 --rc genhtml_function_coverage=1 00:28:52.896 --rc genhtml_legend=1 00:28:52.896 --rc geninfo_all_blocks=1 00:28:52.896 --rc geninfo_unexecuted_blocks=1 00:28:52.896 00:28:52.896 ' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:52.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.896 --rc genhtml_branch_coverage=1 00:28:52.896 --rc genhtml_function_coverage=1 00:28:52.896 --rc genhtml_legend=1 00:28:52.896 --rc geninfo_all_blocks=1 00:28:52.896 --rc geninfo_unexecuted_blocks=1 00:28:52.896 00:28:52.896 ' 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
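Before the shutdown tests start, the harness checks the installed lcov version with the lt/cmp_versions helpers traced above: both version strings are split on dots and compared field by field numerically, and an old lcov gets the extra --rc coverage flags. A condensed sketch of that comparison (not the literal scripts/common.sh code; the real helper also validates each field through its decimal() check):

# Return 0 (true) when version $1 is strictly older than version $2.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not "less than"
}

lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'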
00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.896 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:52.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:52.897 01:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:52.897 ************************************ 00:28:52.897 START TEST nvmf_shutdown_tc1 00:28:52.897 ************************************ 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.897 01:46:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.803 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.804 01:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.804 01:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.804 01:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.804 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.804 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:28:55.064 00:28:55.064 --- 10.0.0.2 ping statistics --- 00:28:55.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.064 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:28:55.064 00:28:55.064 --- 10.0.0.1 ping statistics --- 00:28:55.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.064 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=986901 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 986901 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 986901 ']' 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
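What nvmf_tcp_init built above is a two-port loop on a single host: one E810 port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the cross-namespace pings confirm the wiring before the target app comes up. A sketch of that topology, using the interface names from this run:

ip netns add cvl_0_0_ns_spdk                  # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port: target side
ip addr add 10.0.0.1/24 dev cvl_0_1           # second port stays in the root ns: initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) in on the initiator-facing interface; the comment tag lets
# teardown remove exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator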
00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.064 01:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.064 [2024-10-01 01:46:34.838365] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:55.064 [2024-10-01 01:46:34.838459] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.064 [2024-10-01 01:46:34.906739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.323 [2024-10-01 01:46:34.996578] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.323 [2024-10-01 01:46:34.996631] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.323 [2024-10-01 01:46:34.996655] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.323 [2024-10-01 01:46:34.996666] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.323 [2024-10-01 01:46:34.996676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.323 [2024-10-01 01:46:34.996740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.323 [2024-10-01 01:46:34.996800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.323 [2024-10-01 01:46:34.996869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.323 [2024-10-01 01:46:34.996871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.323 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.323 [2024-10-01 01:46:35.169807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:55.584 01:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.584 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.584 Malloc1 
00:28:55.584 [2024-10-01 01:46:35.259520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.584 Malloc2 00:28:55.584 Malloc3 00:28:55.584 Malloc4 00:28:55.584 Malloc5 00:28:55.843 Malloc6 00:28:55.843 Malloc7 00:28:55.843 Malloc8 00:28:55.843 Malloc9 00:28:55.843 Malloc10 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=986975 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 986975 /var/tmp/bdevperf.sock 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 986975 ']' 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:56.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 
"trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.103 "hdgst": ${hdgst:-false}, 00:28:56.103 "ddgst": ${ddgst:-false} 00:28:56.103 }, 00:28:56.103 "method": "bdev_nvme_attach_controller" 00:28:56.103 } 00:28:56.103 EOF 00:28:56.103 )") 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.103 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.103 01:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.103 { 00:28:56.103 "params": { 00:28:56.103 "name": "Nvme$subsystem", 00:28:56.103 "trtype": "$TEST_TRANSPORT", 00:28:56.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.103 "adrfam": "ipv4", 00:28:56.103 "trsvcid": "$NVMF_PORT", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.104 "hdgst": ${hdgst:-false}, 00:28:56.104 "ddgst": ${ddgst:-false} 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 } 00:28:56.104 EOF 00:28:56.104 )") 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.104 { 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme$subsystem", 00:28:56.104 "trtype": "$TEST_TRANSPORT", 00:28:56.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "$NVMF_PORT", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.104 "hdgst": ${hdgst:-false}, 00:28:56.104 "ddgst": ${ddgst:-false} 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 } 00:28:56.104 EOF 00:28:56.104 )") 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:56.104 { 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme$subsystem", 00:28:56.104 "trtype": "$TEST_TRANSPORT", 00:28:56.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "$NVMF_PORT", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.104 "hdgst": ${hdgst:-false}, 00:28:56.104 "ddgst": ${ddgst:-false} 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 } 00:28:56.104 EOF 00:28:56.104 )") 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:56.104 01:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme1", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme2", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme3", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme4", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme5", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme6", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme7", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme8", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme9", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 },{ 00:28:56.104 "params": { 00:28:56.104 "name": "Nvme10", 00:28:56.104 "trtype": "tcp", 00:28:56.104 "traddr": "10.0.0.2", 00:28:56.104 "adrfam": "ipv4", 00:28:56.104 "trsvcid": "4420", 00:28:56.104 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:56.104 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:56.104 "hdgst": false, 00:28:56.104 "ddgst": false 00:28:56.104 }, 00:28:56.104 "method": "bdev_nvme_attach_controller" 00:28:56.104 }' 00:28:56.104 [2024-10-01 01:46:35.791514] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:56.104 [2024-10-01 01:46:35.791604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:56.104 [2024-10-01 01:46:35.859629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.104 [2024-10-01 01:46:35.947143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 986975 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:58.013 01:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:59.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 986975 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 986901 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.391 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.391 { 00:28:59.391 "params": { 00:28:59.391 "name": "Nvme$subsystem", 00:28:59.391 "trtype": "$TEST_TRANSPORT", 00:28:59.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.391 "adrfam": "ipv4", 00:28:59.391 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 
"trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 
"params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:59.392 { 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme$subsystem", 00:28:59.392 "trtype": "$TEST_TRANSPORT", 00:28:59.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "$NVMF_PORT", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.392 "hdgst": ${hdgst:-false}, 00:28:59.392 "ddgst": ${ddgst:-false} 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 } 00:28:59.392 EOF 00:28:59.392 )") 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:28:59.392 01:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme1", 00:28:59.392 "trtype": "tcp", 00:28:59.392 "traddr": "10.0.0.2", 00:28:59.392 "adrfam": "ipv4", 00:28:59.392 "trsvcid": "4420", 00:28:59.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.392 "hdgst": false, 00:28:59.392 "ddgst": false 00:28:59.392 }, 00:28:59.392 "method": "bdev_nvme_attach_controller" 00:28:59.392 },{ 00:28:59.392 "params": { 00:28:59.392 "name": "Nvme2", 00:28:59.392 "trtype": "tcp", 00:28:59.392 "traddr": "10.0.0.2", 00:28:59.392 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme3", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme4", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme5", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme6", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme7", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme8", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme9", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 },{ 00:28:59.393 "params": { 00:28:59.393 "name": "Nvme10", 00:28:59.393 "trtype": "tcp", 00:28:59.393 "traddr": "10.0.0.2", 00:28:59.393 "adrfam": "ipv4", 00:28:59.393 "trsvcid": "4420", 00:28:59.393 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:59.393 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:59.393 "hdgst": false, 00:28:59.393 "ddgst": false 00:28:59.393 }, 00:28:59.393 "method": "bdev_nvme_attach_controller" 00:28:59.393 }' 00:28:59.393 [2024-10-01 01:46:38.866895] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:28:59.393 [2024-10-01 01:46:38.866986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987382 ] 00:28:59.393 [2024-10-01 01:46:38.933004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.393 [2024-10-01 01:46:39.019772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.773 Running I/O for 1 seconds... 00:29:01.966 1808.00 IOPS, 113.00 MiB/s 00:29:01.966 Latency(us) 00:29:01.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.966 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme1n1 : 1.14 225.06 14.07 0.00 0.00 281312.33 20874.43 254765.13 00:29:01.966 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme2n1 : 1.14 227.50 14.22 0.00 0.00 272354.55 8398.32 254765.13 00:29:01.966 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme3n1 : 1.15 277.93 17.37 0.00 0.00 219809.41 18738.44 251658.24 00:29:01.966 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme4n1 : 1.13 236.22 14.76 0.00 0.00 252012.38 8883.77 265639.25 00:29:01.966 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme5n1 : 1.18 217.69 13.61 0.00 0.00 272966.16 22622.06 254765.13 00:29:01.966 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme6n1 : 1.15 222.87 13.93 0.00 0.00 261638.45 28738.75 253211.69 00:29:01.966 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme7n1 : 1.12 228.56 14.29 0.00 0.00 249927.11 19223.89 254765.13 00:29:01.966 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 
0x400 00:29:01.966 Nvme8n1 : 1.14 224.33 14.02 0.00 0.00 250818.18 21748.24 259425.47 00:29:01.966 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme9n1 : 1.18 216.29 13.52 0.00 0.00 256877.61 22524.97 281173.71 00:29:01.966 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.966 Verification LBA range: start 0x0 length 0x400 00:29:01.966 Nvme10n1 : 1.20 267.00 16.69 0.00 0.00 204734.16 7039.05 284280.60 00:29:01.966 =================================================================================================================== 00:29:01.966 Total : 2343.44 146.47 0.00 0.00 250380.32 7039.05 284280.60 00:29:02.224 01:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:02.224 01:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:02.224 01:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:02.224 01:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:02.224 01:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:02.224 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:02.224 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:02.224 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.225 rmmod nvme_tcp 00:29:02.225 rmmod nvme_fabrics 00:29:02.225 rmmod nvme_keyring 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 986901 ']' 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 986901 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 986901 ']' 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 986901 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:02.225 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:02.225 01:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 986901 00:29:02.484 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:02.484 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:02.484 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 986901' 00:29:02.484 killing process with pid 986901 00:29:02.484 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 986901 00:29:02.484 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 986901 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.051 01:46:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.962 00:29:04.962 real 0m12.134s 00:29:04.962 user 0m35.188s 00:29:04.962 sys 0m3.334s 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 ************************************ 00:29:04.962 END TEST nvmf_shutdown_tc1 00:29:04.962 ************************************ 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:29:04.962 ************************************ 00:29:04.962 START TEST nvmf_shutdown_tc2 00:29:04.962 ************************************ 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 
00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:04.962 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:04.963 01:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.963 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.963 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:04.963 01:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.963 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.963 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:29:05.222 00:29:05.222 --- 10.0.0.2 ping statistics --- 00:29:05.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.222 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:05.222 00:29:05.222 --- 10.0.0.1 ping statistics --- 00:29:05.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.222 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=988229 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 988229 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 988229 ']' 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
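The nvmf_tcp_init / nvmfappstart sequence traced above builds the two-port loopback topology used for the rest of nvmf_shutdown_tc2: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same steps follows; the relative paths and the RPC-socket polling loop are assumptions, since the trace only shows the waitforlisten helper being called, not its body:

    # Isolate the target-side port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in from the initiator interface, then verify reachability.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace and wait for its RPC socket
    # (assumed polling loop standing in for the waitforlisten helper).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done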
00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.222 01:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.222 [2024-10-01 01:46:44.972956] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:05.222 [2024-10-01 01:46:44.973070] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.222 [2024-10-01 01:46:45.038658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.480 [2024-10-01 01:46:45.126794] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.481 [2024-10-01 01:46:45.126852] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.481 [2024-10-01 01:46:45.126876] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.481 [2024-10-01 01:46:45.126887] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.481 [2024-10-01 01:46:45.126896] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.481 [2024-10-01 01:46:45.126980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.481 [2024-10-01 01:46:45.127043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.481 [2024-10-01 01:46:45.127112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.481 [2024-10-01 01:46:45.127115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.481 [2024-10-01 01:46:45.269995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:05.481 01:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.481 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.481 Malloc1 
00:29:05.741 [2024-10-01 01:46:45.344939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.741 Malloc2 00:29:05.741 Malloc3 00:29:05.741 Malloc4 00:29:05.741 Malloc5 00:29:05.741 Malloc6 00:29:06.001 Malloc7 00:29:06.001 Malloc8 00:29:06.001 Malloc9 00:29:06.001 Malloc10 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=988318 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 988318 /var/tmp/bdevperf.sock 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 988318 ']' 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:29:06.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
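The Malloc1 through Malloc10 lines in this stretch of the log are the replies to the batched bdev-creation RPCs: target/shutdown.sh first removes rpcs.txt, the for/cat loop traced above appends one stanza per subsystem, and the single rpc_cmd call at shutdown.sh@36 replays the whole file against the target. The stanza bodies are not expanded in the xtrace (only the cat calls are visible), so the one-iteration sketch below assumes the standard SPDK RPC names and placeholder size variables, and uses a command group where the script itself uses a cat heredoc:

    # One iteration of the create_subsystems loop (assumed stanza contents;
    # only the surrounding 'cat' calls appear in the trace above).
    {
        echo "bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$testdir/rpcs.txt"
    # After the loop, the batch is replayed against the target in one call.
    rpc_cmd < "$testdir/rpcs.txt"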
00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.001 { 00:29:06.001 "params": { 00:29:06.001 "name": "Nvme$subsystem", 00:29:06.001 "trtype": "$TEST_TRANSPORT", 00:29:06.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.001 "adrfam": "ipv4", 00:29:06.001 "trsvcid": "$NVMF_PORT", 00:29:06.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.001 "hdgst": ${hdgst:-false}, 00:29:06.001 "ddgst": ${ddgst:-false} 00:29:06.001 }, 00:29:06.001 "method": "bdev_nvme_attach_controller" 00:29:06.001 } 00:29:06.001 EOF 00:29:06.001 )") 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.001 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.001 { 00:29:06.001 "params": { 00:29:06.001 "name": "Nvme$subsystem", 00:29:06.001 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 
"trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:06.002 { 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme$subsystem", 00:29:06.002 "trtype": "$TEST_TRANSPORT", 00:29:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "$NVMF_PORT", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.002 "hdgst": ${hdgst:-false}, 00:29:06.002 "ddgst": ${ddgst:-false} 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 } 00:29:06.002 EOF 00:29:06.002 )") 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 
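Each pass through the for/cat heredoc loop traced above appends one bdev_nvme_attach_controller stanza to the config array; the IFS=, and printf steps traced just below join the stanzas with commas, jq pretty-prints the assembled document, and bdevperf reads it through process substitution (--json /dev/fd/63) rather than a config file on disk. A condensed sketch of the traced invocation, with paths shortened relative to the SPDK tree:

    # Generate the attach-controller config for subsystems 1..10 and stream it
    # straight into bdevperf via process substitution (no temporary file).
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!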
00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:29:06.002 01:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme1", 00:29:06.002 "trtype": "tcp", 00:29:06.002 "traddr": "10.0.0.2", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "4420", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.002 "hdgst": false, 00:29:06.002 "ddgst": false 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 },{ 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme2", 00:29:06.002 "trtype": "tcp", 00:29:06.002 "traddr": "10.0.0.2", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "4420", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.002 "hdgst": false, 00:29:06.002 "ddgst": false 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 },{ 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme3", 00:29:06.002 "trtype": "tcp", 00:29:06.002 "traddr": "10.0.0.2", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "4420", 00:29:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.002 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.002 "hdgst": false, 00:29:06.002 "ddgst": false 00:29:06.002 }, 00:29:06.002 "method": "bdev_nvme_attach_controller" 00:29:06.002 },{ 00:29:06.002 "params": { 00:29:06.002 "name": "Nvme4", 00:29:06.002 "trtype": "tcp", 00:29:06.002 "traddr": "10.0.0.2", 00:29:06.002 "adrfam": "ipv4", 00:29:06.002 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme5", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme6", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme7", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme8", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme9", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 },{ 00:29:06.003 "params": { 00:29:06.003 "name": "Nvme10", 00:29:06.003 "trtype": "tcp", 00:29:06.003 "traddr": "10.0.0.2", 00:29:06.003 "adrfam": "ipv4", 00:29:06.003 "trsvcid": "4420", 00:29:06.003 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.003 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.003 "hdgst": false, 00:29:06.003 "ddgst": false 00:29:06.003 }, 00:29:06.003 "method": "bdev_nvme_attach_controller" 00:29:06.003 }' 00:29:06.263 [2024-10-01 01:46:45.862247] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:06.263 [2024-10-01 01:46:45.862349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988318 ] 00:29:06.263 [2024-10-01 01:46:45.926528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.263 [2024-10-01 01:46:46.014438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.640 Running I/O for 10 seconds... 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:08.209 01:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 988318 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 988318 ']' 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 988318 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 988318 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.470 01:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 988318' 00:29:08.470 killing process with pid 988318 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 988318 00:29:08.470 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 988318 00:29:08.729 Received shutdown signal, test time was about 0.858885 seconds 00:29:08.729 00:29:08.729 Latency(us) 00:29:08.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.729 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme1n1 : 0.84 228.34 14.27 0.00 0.00 276707.56 24855.13 257872.02 00:29:08.729 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme2n1 : 0.82 239.54 14.97 0.00 0.00 255707.45 7281.78 254765.13 00:29:08.729 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme3n1 : 0.83 231.06 14.44 0.00 0.00 261197.12 19515.16 257872.02 00:29:08.729 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme4n1 : 0.81 236.03 14.75 0.00 0.00 249032.25 21748.24 259425.47 00:29:08.729 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme5n1 : 0.85 225.39 14.09 0.00 0.00 255886.73 23204.60 256318.58 00:29:08.729 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.729 Nvme6n1 : 0.84 232.58 14.54 0.00 0.00 240124.67 5461.33 250104.79 00:29:08.729 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.729 Verification LBA range: start 0x0 length 0x400 00:29:08.730 Nvme7n1 : 0.83 257.29 16.08 0.00 0.00 208234.12 12136.30 240784.12 00:29:08.730 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.730 Verification LBA range: start 0x0 length 0x400 00:29:08.730 Nvme8n1 : 0.85 226.83 14.18 0.00 0.00 235789.27 20097.71 236123.78 00:29:08.730 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.730 Verification LBA range: start 0x0 length 0x400 00:29:08.730 Nvme9n1 : 0.86 223.98 14.00 0.00 0.00 233187.81 21748.24 282727.16 00:29:08.730 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:08.730 Verification LBA range: start 0x0 length 0x400 00:29:08.730 Nvme10n1 : 0.86 223.75 13.98 0.00 0.00 228046.06 36894.34 260978.92 00:29:08.730 =================================================================================================================== 00:29:08.730 Total : 2324.80 145.30 0.00 0.00 244022.18 5461.33 282727.16 00:29:08.989 01:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 988229 00:29:09.926 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.926 rmmod nvme_tcp 00:29:09.926 rmmod nvme_fabrics 00:29:09.926 rmmod nvme_keyring 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 988229 ']' 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 988229 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 988229 ']' 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 988229 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 988229 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 988229' 00:29:09.926 killing process with pid 988229 00:29:09.926 01:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 988229 00:29:09.926 01:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 988229 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.500 01:46:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.040 00:29:13.040 real 0m7.541s 00:29:13.040 user 0m22.434s 00:29:13.040 sys 0m1.518s 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.040 ************************************ 00:29:13.040 END TEST nvmf_shutdown_tc2 00:29:13.040 ************************************ 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:13.040 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.040 ************************************ 00:29:13.040 START TEST nvmf_shutdown_tc3 00:29:13.041 ************************************ 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:13.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:13.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.041 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:13.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:13.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.042 01:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:29:13.042 00:29:13.042 --- 10.0.0.2 ping statistics --- 00:29:13.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.042 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:13.042 00:29:13.042 --- 10.0.0.1 ping statistics --- 00:29:13.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.042 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=989228 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@506 -- # waitforlisten 989228 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 989228 ']' 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.042 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.043 [2024-10-01 01:46:52.560559] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:13.043 [2024-10-01 01:46:52.560633] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.043 [2024-10-01 01:46:52.625918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.043 [2024-10-01 01:46:52.711642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.043 [2024-10-01 01:46:52.711695] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.043 [2024-10-01 01:46:52.711718] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.043 [2024-10-01 01:46:52.711728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.043 [2024-10-01 01:46:52.711738] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
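The setup traced above (nvmf_tcp_init) builds the usual two-port NVMe/TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target interface at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator interface at 10.0.0.1, an iptables rule opens TCP port 4420 toward the initiator side, and one ping in each direction confirms connectivity before nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E). Below is a minimal standalone sketch of that sequence; interface names, addresses and the port are taken from the trace, but the script layout itself is an illustrative reconstruction, not the nvmf/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the namespace topology set up by nvmf_tcp_init (reconstruction;
# names and addresses copied from the trace above). Requires root.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0    # target-side port, moved into the namespace
INI_IF=cvl_0_1    # initiator-side port, left in the default namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) arriving on the initiator interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # default namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator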
00:29:13.043 [2024-10-01 01:46:52.711821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.043 [2024-10-01 01:46:52.711885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.043 [2024-10-01 01:46:52.711952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.043 [2024-10-01 01:46:52.711954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.043 [2024-10-01 01:46:52.857466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.043 01:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.303 Malloc1 00:29:13.303 [2024-10-01 01:46:52.932678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.303 Malloc2 00:29:13.303 Malloc3 00:29:13.303 Malloc4 00:29:13.303 Malloc5 00:29:13.303 Malloc6 00:29:13.561 Malloc7 00:29:13.561 Malloc8 00:29:13.561 Malloc9 00:29:13.561 Malloc10 00:29:13.561 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.561 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:13.561 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=989405 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 989405 /var/tmp/bdevperf.sock 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 989405 ']' 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.562 01:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.562 { 00:29:13.562 "params": { 00:29:13.562 "name": "Nvme$subsystem", 00:29:13.562 "trtype": "$TEST_TRANSPORT", 00:29:13.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.562 "adrfam": "ipv4", 00:29:13.562 "trsvcid": "$NVMF_PORT", 00:29:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.562 "hdgst": ${hdgst:-false}, 00:29:13.562 "ddgst": ${ddgst:-false} 00:29:13.562 }, 00:29:13.562 "method": "bdev_nvme_attach_controller" 00:29:13.562 } 00:29:13.562 EOF 00:29:13.562 )") 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.562 { 00:29:13.562 "params": { 00:29:13.562 "name": "Nvme$subsystem", 00:29:13.562 "trtype": "$TEST_TRANSPORT", 00:29:13.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.562 "adrfam": "ipv4", 00:29:13.562 "trsvcid": "$NVMF_PORT", 00:29:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.562 "hdgst": ${hdgst:-false}, 00:29:13.562 "ddgst": ${ddgst:-false} 00:29:13.562 }, 00:29:13.562 "method": "bdev_nvme_attach_controller" 00:29:13.562 } 00:29:13.562 EOF 00:29:13.562 )") 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.562 { 00:29:13.562 "params": { 00:29:13.562 
"name": "Nvme$subsystem", 00:29:13.562 "trtype": "$TEST_TRANSPORT", 00:29:13.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.562 "adrfam": "ipv4", 00:29:13.562 "trsvcid": "$NVMF_PORT", 00:29:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.562 "hdgst": ${hdgst:-false}, 00:29:13.562 "ddgst": ${ddgst:-false} 00:29:13.562 }, 00:29:13.562 "method": "bdev_nvme_attach_controller" 00:29:13.562 } 00:29:13.562 EOF 00:29:13.562 )") 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.562 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.562 { 00:29:13.562 "params": { 00:29:13.562 "name": "Nvme$subsystem", 00:29:13.562 "trtype": "$TEST_TRANSPORT", 00:29:13.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.562 "adrfam": "ipv4", 00:29:13.562 "trsvcid": "$NVMF_PORT", 00:29:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.562 "hdgst": ${hdgst:-false}, 00:29:13.562 "ddgst": ${ddgst:-false} 00:29:13.562 }, 00:29:13.562 "method": "bdev_nvme_attach_controller" 00:29:13.562 } 00:29:13.562 EOF 00:29:13.562 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:13.825 { 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme$subsystem", 00:29:13.825 "trtype": "$TEST_TRANSPORT", 00:29:13.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "$NVMF_PORT", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.825 "hdgst": ${hdgst:-false}, 00:29:13.825 "ddgst": ${ddgst:-false} 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 } 00:29:13.825 EOF 00:29:13.825 )") 00:29:13.825 01:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:29:13.825 01:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme1", 00:29:13.825 "trtype": "tcp", 00:29:13.825 "traddr": "10.0.0.2", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "4420", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:13.825 "hdgst": false, 00:29:13.825 "ddgst": false 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 },{ 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme2", 00:29:13.825 "trtype": "tcp", 00:29:13.825 "traddr": "10.0.0.2", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "4420", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:13.825 "hdgst": false, 00:29:13.825 "ddgst": false 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 },{ 00:29:13.825 "params": { 00:29:13.825 "name": "Nvme3", 00:29:13.825 "trtype": "tcp", 00:29:13.825 "traddr": "10.0.0.2", 00:29:13.825 "adrfam": "ipv4", 00:29:13.825 "trsvcid": "4420", 00:29:13.825 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:13.825 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:13.825 "hdgst": false, 00:29:13.825 "ddgst": false 00:29:13.825 }, 00:29:13.825 "method": "bdev_nvme_attach_controller" 00:29:13.825 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme4", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme5", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme6", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme7", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme8", 00:29:13.826 "trtype": "tcp", 
00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme9", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 },{ 00:29:13.826 "params": { 00:29:13.826 "name": "Nvme10", 00:29:13.826 "trtype": "tcp", 00:29:13.826 "traddr": "10.0.0.2", 00:29:13.826 "adrfam": "ipv4", 00:29:13.826 "trsvcid": "4420", 00:29:13.826 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:13.826 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:13.826 "hdgst": false, 00:29:13.826 "ddgst": false 00:29:13.826 }, 00:29:13.826 "method": "bdev_nvme_attach_controller" 00:29:13.826 }' 00:29:13.826 [2024-10-01 01:46:53.451134] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:13.826 [2024-10-01 01:46:53.451214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989405 ] 00:29:13.826 [2024-10-01 01:46:53.514843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.826 [2024-10-01 01:46:53.601555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.734 Running I/O for 10 seconds... 
00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:15.734 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:15.735 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=82 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 82 -ge 100 ']' 00:29:15.995 01:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.255 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 989228 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 989228 ']' 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 989228 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 989228 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:16.534 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:16.535 01:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 989228' 00:29:16.535 killing process with pid 989228 00:29:16.535 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 989228 00:29:16.535 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 989228 00:29:16.535 [2024-10-01 01:46:56.180169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be 
set 00:29:16.535 [2024-10-01 01:46:56.180492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.180891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17667e0 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.535 [2024-10-01 01:46:56.182252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.536 [2024-10-01 01:46:56.182264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.536 [2024-10-01 01:46:56.182277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.536 [2024-10-01 01:46:56.182301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.536 [2024-10-01 01:46:56.182313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set 00:29:16.536 [2024-10-01 01:46:56.182325] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769390 is same with the state(6) to be set
(last message repeated for tqpair=0x1769390 through 2024-10-01 01:46:56.183023)
00:29:16.536 [2024-10-01 01:46:56.185583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1767180 is same with the state(6) to be set
(last message repeated for tqpair=0x1767180 through 2024-10-01 01:46:56.186469)
00:29:16.537 [2024-10-01 01:46:56.187790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1767670 is same with the state(6) to be set
(last message repeated for tqpair=0x1767670 through 2024-10-01 01:46:56.188673)
00:29:16.538 [2024-10-01 01:46:56.189581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1767b40 is same with the state(6) to be set
(last message repeated for tqpair=0x1767b40 through 2024-10-01 01:46:56.190418)
00:29:16.539 [2024-10-01 01:46:56.191568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768030 is same with the state(6) to be set
(last message repeated for tqpair=0x1768030 through 2024-10-01 01:46:56.192398)
00:29:16.541 [2024-10-01 01:46:56.194332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.541 [2024-10-01 01:46:56.194387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the WRITE command / ABORTED - SQ DELETION completion pair above repeated for cid:1 through cid:39, lba:24704 through lba:29568 in steps of 128, len:128, through 2024-10-01 01:46:56.195709)
00:29:16.541 [2024-10-01 01:46:56.194671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17689d0 is same with the state(6) to be set
(last message repeated for tqpair=0x17689d0, interleaved with the nvme_qpair output above, through 2024-10-01 01:46:56.195544) 00:29:16.543 
[2024-10-01 01:46:56.195724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.195965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 01:46:56.195980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543 [2024-10-01 01:46:56.196025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543 [2024-10-01 
01:46:56.196044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543
[2024-10-01 01:46:56.196059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543
[2024-10-01 01:46:56.196074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543
[2024-10-01 01:46:56.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543
[2024-10-01 01:46:56.196104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543
[2024-10-01 01:46:56.196118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543
[2024-10-01 01:46:56.196133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543
[2024-10-01 01:46:56.196147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543
[2024-10-01 01:46:56.196163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.543
[2024-10-01 01:46:56.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.543
[2024-10-01 01:46:56.196194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.544
[2024-10-01 01:46:56.196473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.544
[2024-10-01 01:46:56.196485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.544
[2024-10-01 01:46:56.196522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544
[2024-10-01 01:46:56.196609] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a3940 was disconnected and freed. reset controller.
00:29:16.544 [2024-10-01 01:46:56.196619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.544 [2024-10-01 01:46:56.196886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545 [2024-10-01 01:46:56.196897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.196963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.196983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768ea0 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219170 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220df40 is same with the state(6) to be set 00:29:16.545
[2024-10-01 01:46:56.197427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545
[2024-10-01 01:46:56.197491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545
[2024-10-01 01:46:56.197504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccde0 is same with the state(6) to be set 00:29:16.545 [2024-10-01 01:46:56.197593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da1d40 is same with the state(6) to be set 00:29:16.545 [2024-10-01 01:46:56.197760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.545 [2024-10-01 01:46:56.197867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.545 [2024-10-01 01:46:56.197881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.197894] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f5d0 is same with the state(6) to be set 00:29:16.546 [2024-10-01 01:46:56.197942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.197963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.197978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97530 is same with the state(6) to be set 00:29:16.546 [2024-10-01 01:46:56.198126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d979a0 is same with the state(6) to be set 00:29:16.546 [2024-10-01 01:46:56.198293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96a20 is same with the state(6) to be set 00:29:16.546 [2024-10-01 01:46:56.198457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:16.546 [2024-10-01 01:46:56.198570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccc00 is same with the state(6) to be set 00:29:16.546 [2024-10-01 01:46:56.198744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.198965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.198981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.199010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.546 [2024-10-01 01:46:56.199028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.546 [2024-10-01 01:46:56.199042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.547 [2024-10-01 01:46:56.199758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.547 [2024-10-01 01:46:56.199773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.547 [2024-10-01 01:46:56.199787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.547-00:29:16.548 [2024-10-01 01:46:56.199801 - 01:46:56.200772] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:33-63 nsid:1 lba:28800-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (31 command/completion pairs)
00:29:16.548 [2024-10-01 01:46:56.200786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa7350 is same with the state(6) to be set
00:29:16.548 [2024-10-01 01:46:56.200852] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fa7350 was disconnected and freed. reset controller.
00:29:16.548 [2024-10-01 01:46:56.202455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:16.548 [2024-10-01 01:46:56.202513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccc00 (9): Bad file descriptor
00:29:16.548 [2024-10-01 01:46:56.204188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:29:16.548 [2024-10-01 01:46:56.204232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97530 (9): Bad file descriptor
00:29:16.548 [2024-10-01 01:46:56.205259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.548 [2024-10-01 01:46:56.205304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccc00 with addr=10.0.0.2, port=4420
00:29:16.548 [2024-10-01 01:46:56.205323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccc00 is same with the state(6) to be set
00:29:16.548-00:29:16.549 [2024-10-01 01:46:56.205909 - 01:46:56.206345] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (6 occurrences)
00:29:16.549 [2024-10-01 01:46:56.206533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.549 [2024-10-01 01:46:56.206569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d97530 with addr=10.0.0.2, port=4420
00:29:16.549 [2024-10-01 01:46:56.206597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97530 is same with the state(6) to be set
00:29:16.549 [2024-10-01 01:46:56.206622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccc00 (9): Bad file descriptor
00:29:16.549-00:29:16.551 [2024-10-01 01:46:56.206673 - 01:46:56.208550] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (60 command/completion pairs)
00:29:16.551 [2024-10-01 01:46:56.208639] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fa6130 was disconnected and freed. reset controller.
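Editorial aside (not part of the captured console output): the abort storm above is easier to read in aggregate. Below is a minimal, hypothetical Python sketch for tallying these records from a saved copy of the console log; the default file name console.log and the regular expressions are assumptions based only on the message formats visible here.

#!/usr/bin/env python3
# Hypothetical helper, not produced by this build: summarize repeated SPDK
# NVMe abort/reset records in a saved console log (one record per line).
import re
import sys
from collections import Counter

# Patterns assumed from the messages printed above.
CMD = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)')
ABORT = re.compile(r'spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION')
CTRLR = re.compile(r'\[(nqn\.[0-9]{4}-[0-9]{2}\.[^\]]+)\] (resetting controller|controller reinitialization failed|in failed state\.)')

def summarize(path):
    cmds = Counter()    # (opcode, sqid) -> number of command prints
    aborts = 0          # number of "ABORTED - SQ DELETION" completion prints
    events = Counter()  # (subsystem NQN, controller event) -> occurrences
    with open(path, errors='replace') as fh:
        for line in fh:
            for op, sqid, _cid in CMD.findall(line):
                cmds[(op, sqid)] += 1
            aborts += len(ABORT.findall(line))
            for nqn, event in CTRLR.findall(line):
                events[(nqn, event)] += 1
    return cmds, aborts, events

if __name__ == '__main__':
    cmds, aborts, events = summarize(sys.argv[1] if len(sys.argv) > 1 else 'console.log')
    for (op, sqid), n in sorted(cmds.items()):
        print(f'{op} sqid:{sqid}: {n} commands printed')
    print(f'ABORTED - SQ DELETION completions: {aborts}')
    for (nqn, event), n in sorted(events.items()):
        print(f'{nqn}: {event} x{n}')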
00:29:16.551 [2024-10-01 01:46:56.208808] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:16.551 [2024-10-01 01:46:56.208868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97530 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.208900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:29:16.551 [2024-10-01 01:46:56.208926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:29:16.551 [2024-10-01 01:46:56.208945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:29:16.551 [2024-10-01 01:46:56.209041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.551 [2024-10-01 01:46:56.209069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.551 [2024-10-01 01:46:56.209096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.551 [2024-10-01 01:46:56.209118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.551 [2024-10-01 01:46:56.209135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.551 [2024-10-01 01:46:56.209149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.551 [2024-10-01 01:46:56.209163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:16.551 [2024-10-01 01:46:56.209177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.551 [2024-10-01 01:46:56.209189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d3c0 is same with the state(6) to be set
00:29:16.551 [2024-10-01 01:46:56.209225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219170 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220df40 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccde0 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da1d40 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9f5d0 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d979a0 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.209446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d96a20 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.210678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:16.551 [2024-10-01 01:46:56.210705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:16.551 [2024-10-01 01:46:56.210739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:29:16.551 [2024-10-01 01:46:56.210756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:29:16.551 [2024-10-01 01:46:56.210770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:16.551 [2024-10-01 01:46:56.210847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:16.551 [2024-10-01 01:46:56.210971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.551 [2024-10-01 01:46:56.211017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da1d40 with addr=10.0.0.2, port=4420
00:29:16.551 [2024-10-01 01:46:56.211036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da1d40 is same with the state(6) to be set
00:29:16.551 [2024-10-01 01:46:56.211366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da1d40 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.211435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:16.551 [2024-10-01 01:46:56.211454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:16.551 [2024-10-01 01:46:56.211479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:16.551 [2024-10-01 01:46:56.211544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:16.551 [2024-10-01 01:46:56.214406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:16.551 [2024-10-01 01:46:56.214607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.551 [2024-10-01 01:46:56.214637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccc00 with addr=10.0.0.2, port=4420
00:29:16.551 [2024-10-01 01:46:56.214654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccc00 is same with the state(6) to be set
00:29:16.551 [2024-10-01 01:46:56.214712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccc00 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.214768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:29:16.551 [2024-10-01 01:46:56.214786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:29:16.551 [2024-10-01 01:46:56.214801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:29:16.551 [2024-10-01 01:46:56.214858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:16.551 [2024-10-01 01:46:56.215482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:29:16.551 [2024-10-01 01:46:56.215671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.551 [2024-10-01 01:46:56.215700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d97530 with addr=10.0.0.2, port=4420
00:29:16.551 [2024-10-01 01:46:56.215717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97530 is same with the state(6) to be set
00:29:16.551 [2024-10-01 01:46:56.215794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97530 (9): Bad file descriptor
00:29:16.551 [2024-10-01 01:46:56.215852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:29:16.551 [2024-10-01 01:46:56.215873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:29:16.551 [2024-10-01 01:46:56.215888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:16.551 [2024-10-01 01:46:56.215943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:16.551 [2024-10-01 01:46:56.218912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d3c0 (9): Bad file descriptor
00:29:16.551-00:29:16.554 [2024-10-01 01:46:56.219133 - 01:46:56.221213] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:29:16.554 [2024-10-01 01:46:56.221228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cea0 is same with the state(6) to be set
00:29:16.554 [2024-10-01 01:46:56.222553 - 01:46:56.223232] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-24 nsid:1 lba:25088-27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (21 command/completion pairs)
00:29:16.554 [2024-10-01 01:46:56.223249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.554 [2024-10-01 01:46:56.223264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.554 [2024-10-01 01:46:56.223458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.554 [2024-10-01 01:46:56.223474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.223958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.223975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.555 [2024-10-01 01:46:56.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.555 [2024-10-01 01:46:56.224428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.224593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.224612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e360 is same with the state(6) to be set 00:29:16.556 [2024-10-01 01:46:56.225842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.225866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.225887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.225902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.225918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.225932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.225962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.225977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.225991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.556 [2024-10-01 01:46:56.226557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.556 [2024-10-01 01:46:56.226573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.226977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.226990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.557 [2024-10-01 01:46:56.227328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.557 [2024-10-01 01:46:56.227475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.557 [2024-10-01 01:46:56.227490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 
01:46:56.227627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.227805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.227818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f890 is same with the state(6) to be set 00:29:16.558 [2024-10-01 01:46:56.229078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-10-01 01:46:56.229480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.558 [2024-10-01 01:46:56.229496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.229976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.229992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.559 [2024-10-01 01:46:56.230418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-10-01 01:46:56.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.230967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.230981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.231010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.231026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.231042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.231056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.231072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.231086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.231100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a23e0 is same with the state(6) to be set 00:29:16.560 [2024-10-01 01:46:56.232379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.560 [2024-10-01 01:46:56.232604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.560 [2024-10-01 01:46:56.232619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.232965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.232981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.561 [2024-10-01 01:46:56.233551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.561 [2024-10-01 01:46:56.233567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:16.562 [2024-10-01 01:46:56.233936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.233965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.233978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 
01:46:56.234256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.562 [2024-10-01 01:46:56.234398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.562 [2024-10-01 01:46:56.234412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a4ec0 is same with the state(6) to be set 00:29:16.563 [2024-10-01 01:46:56.235688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.235970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.563 [2024-10-01 01:46:56.236635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.563 [2024-10-01 01:46:56.236650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.236972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.236989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.564 [2024-10-01 01:46:56.237415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.564 [2024-10-01 01:46:56.237430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.237702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.237717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a7880 is same with the state(6) to be set 00:29:16.565 [2024-10-01 01:46:56.239804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:16.565 [2024-10-01 01:46:56.239850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:16.565 [2024-10-01 01:46:56.239869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:16.565 [2024-10-01 01:46:56.239887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:16.565 [2024-10-01 01:46:56.240036] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.565 [2024-10-01 01:46:56.240065] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.565 [2024-10-01 01:46:56.255882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:16.565 [2024-10-01 01:46:56.255979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:16.565 [2024-10-01 01:46:56.256330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.565 [2024-10-01 01:46:56.256377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d9f5d0 with addr=10.0.0.2, port=4420 00:29:16.565 [2024-10-01 01:46:56.256398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9f5d0 is same with the state(6) to be set 00:29:16.565 [2024-10-01 01:46:56.256540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.565 [2024-10-01 01:46:56.256567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d979a0 with addr=10.0.0.2, port=4420 00:29:16.565 [2024-10-01 01:46:56.256584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d979a0 is same with the state(6) to be set 00:29:16.565 [2024-10-01 01:46:56.256694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.565 [2024-10-01 01:46:56.256721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccde0 with addr=10.0.0.2, port=4420 00:29:16.565 [2024-10-01 01:46:56.256737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccde0 is same with the state(6) to be set 00:29:16.565 [2024-10-01 01:46:56.256863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.565 [2024-10-01 01:46:56.256891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d96a20 with addr=10.0.0.2, port=4420 00:29:16.565 [2024-10-01 01:46:56.256907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96a20 is same with the state(6) to be set 00:29:16.565 [2024-10-01 01:46:56.256950] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.565 [2024-10-01 01:46:56.256975] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.565 [2024-10-01 01:46:56.257018] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.565 [2024-10-01 01:46:56.257042] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:16.565 [2024-10-01 01:46:56.257074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d96a20 (9): Bad file descriptor 00:29:16.565 [2024-10-01 01:46:56.257102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccde0 (9): Bad file descriptor 00:29:16.565 [2024-10-01 01:46:56.257127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d979a0 (9): Bad file descriptor 00:29:16.565 [2024-10-01 01:46:56.257150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9f5d0 (9): Bad file descriptor 00:29:16.565 [2024-10-01 01:46:56.258602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.565 [2024-10-01 01:46:56.258929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.565 [2024-10-01 01:46:56.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.258959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.258975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.566 [2024-10-01 01:46:56.259871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.566 [2024-10-01 01:46:56.259885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.259900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.259914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.259929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.259944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.259960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.259973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.567 [2024-10-01 01:46:56.260171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 01:46:56.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.567 [2024-10-01 01:46:56.260478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.567 [2024-10-01 
01:46:56.260492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.567 [2024-10-01 01:46:56.260521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.567 [2024-10-01 01:46:56.260554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.567 [2024-10-01 01:46:56.260583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.567 [2024-10-01 01:46:56.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.567 [2024-10-01 01:46:56.260646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:16.567 [2024-10-01 01:46:56.260661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6300 is same with the state(6) to be set
00:29:16.567 task offset: 24576 on job bdev=Nvme7n1 fails
00:29:16.567 1683.04 IOPS, 105.19 MiB/s [2024-10-01 01:46:56.262695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:16.567 [2024-10-01 01:46:56.262739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:16.567 [2024-10-01 01:46:56.262757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:29:16.567
00:29:16.567 Latency(us)
00:29:16.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.567 Job: Nvme1n1 ended in about 0.98 seconds with error
00:29:16.567 Verification LBA range: start 0x0 length 0x400
00:29:16.567 Nvme1n1 : 0.98 199.40 12.46 61.04 0.00 243122.06 4174.89 265639.25
00:29:16.567 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.567 Job: Nvme2n1 ended in about 0.98 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme2n1 : 0.98 196.70 12.29 65.57 0.00 236909.42 9466.31 262532.36
00:29:16.568 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme3n1 ended in about 0.99 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme3n1 : 0.99 193.01 12.06 64.34 0.00 236941.08 29903.83 251658.24
00:29:16.568 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme4n1 ended in about 1.00 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme4n1 : 1.00 196.38 12.27 64.12 0.00 229555.80 17573.36 259425.47
00:29:16.568 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme5n1 ended in about 1.00 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme5n1 : 1.00 127.84 7.99 63.92 0.00 305881.95 21748.24 281173.71
00:29:16.568 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme6n1 ended in about 1.00 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme6n1 : 1.00 191.13 11.95 63.71 0.00 225606.92 20000.62 243891.01
00:29:16.568 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme7n1 ended in about 0.97 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme7n1 : 0.97 197.00 12.31 65.67 0.00 213563.54 19126.80 253211.69
00:29:16.568 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme8n1 ended in about 1.01 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme8n1 : 1.01 127.00 7.94 63.50 0.00 289998.82 19709.35 262532.36
00:29:16.568 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme9n1 ended in about 1.03 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme9n1 : 1.03 123.78 7.74 61.89 0.00 292915.07 19806.44 290494.39
00:29:16.568 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:16.568 Job: Nvme10n1 ended in about 1.01 seconds with error
00:29:16.568 Verification LBA range: start 0x0 length 0x400
00:29:16.568 Nvme10n1 : 1.01 126.59 7.91 63.29 0.00 279505.48 19903.53 267192.70
00:29:16.568 ===================================================================================================================
00:29:16.568 Total : 1678.82 104.93 637.04 0.00 251287.25 4174.89 290494.39
00:29:16.568 [2024-10-01 01:46:56.289126] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:16.568 [2024-10-01 01:46:56.289213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:16.568 [2024-10-01 01:46:56.289555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.568 [2024-10-01 01:46:56.289592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220df40 with addr=10.0.0.2, port=4420
00:29:16.568 [2024-10-01 01:46:56.289613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220df40 is same with the state(6) to be set
00:29:16.568 [2024-10-01 01:46:56.289735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.568 [2024-10-01 01:46:56.289763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2219170 with addr=10.0.0.2, port=4420
00:29:16.568 [2024-10-01 01:46:56.289780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219170 is same with the state(6) to be set
00:29:16.568 [2024-10-01 01:46:56.290121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno =
111 00:29:16.568 [2024-10-01 01:46:56.290152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da1d40 with addr=10.0.0.2, port=4420 00:29:16.568 [2024-10-01 01:46:56.290170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da1d40 is same with the state(6) to be set 00:29:16.568 [2024-10-01 01:46:56.290268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.568 [2024-10-01 01:46:56.290303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccc00 with addr=10.0.0.2, port=4420 00:29:16.568 [2024-10-01 01:46:56.290319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccc00 is same with the state(6) to be set 00:29:16.568 [2024-10-01 01:46:56.290427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.568 [2024-10-01 01:46:56.290454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d97530 with addr=10.0.0.2, port=4420 00:29:16.568 [2024-10-01 01:46:56.290470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97530 is same with the state(6) to be set 00:29:16.568 [2024-10-01 01:46:56.290595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.568 [2024-10-01 01:46:56.290620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220d3c0 with addr=10.0.0.2, port=4420 00:29:16.568 [2024-10-01 01:46:56.290636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d3c0 is same with the state(6) to be set 00:29:16.568 [2024-10-01 01:46:56.290661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220df40 (9): Bad file descriptor 00:29:16.568 [2024-10-01 01:46:56.290683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219170 (9): Bad file descriptor 00:29:16.568 [2024-10-01 01:46:56.290700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:16.568 [2024-10-01 01:46:56.290715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:16.568 [2024-10-01 01:46:56.290731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:16.568 [2024-10-01 01:46:56.290754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:16.568 [2024-10-01 01:46:56.290769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:16.568 [2024-10-01 01:46:56.290795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:16.568 [2024-10-01 01:46:56.290814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:16.568 [2024-10-01 01:46:56.290829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:16.568 [2024-10-01 01:46:56.290843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:29:16.568 [2024-10-01 01:46:56.290860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:16.568 [2024-10-01 01:46:56.290874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:16.568 [2024-10-01 01:46:56.290887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:16.568 [2024-10-01 01:46:56.290915] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.568 [2024-10-01 01:46:56.290938] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.568 [2024-10-01 01:46:56.290960] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.568 [2024-10-01 01:46:56.290990] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.568 [2024-10-01 01:46:56.291019] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.568 [2024-10-01 01:46:56.291039] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:16.569 [2024-10-01 01:46:56.291446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.291471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.291484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.291496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.291512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da1d40 (9): Bad file descriptor 00:29:16.569 [2024-10-01 01:46:56.291533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccc00 (9): Bad file descriptor 00:29:16.569 [2024-10-01 01:46:56.291551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97530 (9): Bad file descriptor 00:29:16.569 [2024-10-01 01:46:56.291569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220d3c0 (9): Bad file descriptor 00:29:16.569 [2024-10-01 01:46:56.291585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.291598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.291611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:16.569 [2024-10-01 01:46:56.291629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.291643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.291657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:16.569 [2024-10-01 01:46:56.291944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.569 [2024-10-01 01:46:56.291969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.291993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.292017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.292037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.569 [2024-10-01 01:46:56.292056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.292071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.292085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:16.569 [2024-10-01 01:46:56.292101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.292115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.292128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:16.569 [2024-10-01 01:46:56.292144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:16.569 [2024-10-01 01:46:56.292158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:16.569 [2024-10-01 01:46:56.292172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:16.569 [2024-10-01 01:46:56.292220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.292239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.292251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.569 [2024-10-01 01:46:56.292262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
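The connect() failures threaded through this shutdown sequence all report errno = 111, which is ECONNREFUSED on Linux: the bdev_nvme reset path keeps re-dialing 10.0.0.2:4420 after the target side has stopped listening, and the later "Bad file descriptor" flushes and "controller reinitialization failed" errors follow from those refused connects. A minimal bash sketch of that condition (illustrative only, not part of shutdown.sh; the address and port are copied from the log above):

    # Illustrative sketch only -- errno 111 is ECONNREFUSED on Linux.
    target_ip=10.0.0.2     # address and port taken from the log above
    target_port=4420
    # bash's /dev/tcp pseudo-device issues a plain TCP connect(); with nothing
    # listening on ${target_ip}:${target_port} the connect is refused, the same
    # condition the posix_sock_create errors above are reporting.
    if ! timeout 1 bash -c "exec 3<>/dev/tcp/${target_ip}/${target_port}" 2>/dev/null; then
        echo "connect to ${target_ip}:${target_port} refused (ECONNREFUSED, errno 111)"
    fi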
00:29:17.134 01:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 989405 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 989405 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 989405 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.078 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.078 rmmod nvme_tcp 00:29:18.078 
rmmod nvme_fabrics 00:29:18.078 rmmod nvme_keyring 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 989228 ']' 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 989228 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 989228 ']' 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 989228 00:29:18.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (989228) - No such process 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 989228 is not found' 00:29:18.079 Process with pid 989228 is not found 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.079 01:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.984 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.984 00:29:19.984 real 0m7.490s 00:29:19.984 user 0m18.479s 00:29:19.984 sys 0m1.482s 00:29:19.984 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.984 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:19.984 ************************************ 00:29:19.984 END TEST nvmf_shutdown_tc3 00:29:19.984 ************************************ 00:29:20.243 01:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:20.243 ************************************ 00:29:20.243 START TEST nvmf_shutdown_tc4 00:29:20.243 ************************************ 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:20.243 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 
== 0 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:20.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:20.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:20.244 Found net devices under 0000:0a:00.0: cvl_0_0 
00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:20.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.244 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:29:20.245 00:29:20.245 --- 10.0.0.2 ping statistics --- 00:29:20.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.245 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:29:20.245 01:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:20.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:29:20.245 00:29:20.245 --- 10.0.0.1 ping statistics --- 00:29:20.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.245 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=990315 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 990315 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 990315 ']' 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
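The trace above (nvmf_tcp_init) builds an isolated point-to-point test bed: the target-side port cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace on core mask 0x1E. A minimal stand-alone sketch of the same steps, using the interface names and options shown in the trace (run as root; this is an illustration, not the exact common.sh code):

# create an isolated namespace for the target-side port
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends of the link: initiator in the root namespace, target in the new one
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the NVMe-oF target inside the namespace (cores 1-4, mask 0x1E) and remember its pid
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!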
00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.245 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.245 [2024-10-01 01:47:00.090332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:20.245 [2024-10-01 01:47:00.090427] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.505 [2024-10-01 01:47:00.159240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.505 [2024-10-01 01:47:00.249926] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.505 [2024-10-01 01:47:00.249991] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.505 [2024-10-01 01:47:00.250026] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.505 [2024-10-01 01:47:00.250038] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.505 [2024-10-01 01:47:00.250048] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.505 [2024-10-01 01:47:00.250135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.505 [2024-10-01 01:47:00.250197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.505 [2024-10-01 01:47:00.250264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.505 [2024-10-01 01:47:00.250267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.765 [2024-10-01 01:47:00.410923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:20.765 01:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.765 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.765 Malloc1 
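shutdown.sh@21 above creates the TCP transport over RPC (nvmf_create_transport -t tcp -o -u 8192), and the loop over num_subsystems=1..10 then emits one configuration block per subsystem into rpcs.txt; the Malloc1 line is the first of the ten backing malloc bdevs, with Malloc2 through Malloc10 following below once the target starts listening on 10.0.0.2:4420. A hedged sketch of the kind of per-subsystem RPC sequence that produces those bdevs and listeners follows; the bdev sizes and NQN format used by shutdown.sh are assumptions here, and scripts/rpc.py talks to the target over /var/tmp/spdk.sock:

# transport first, mirroring shutdown.sh@21
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# one namespace-backed subsystem per iteration, i = 1..10 (sizes and NQNs are illustrative)
for i in $(seq 1 10); do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done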
00:29:20.765 [2024-10-01 01:47:00.504414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.765 Malloc2 00:29:20.765 Malloc3 00:29:21.023 Malloc4 00:29:21.023 Malloc5 00:29:21.023 Malloc6 00:29:21.024 Malloc7 00:29:21.024 Malloc8 00:29:21.283 Malloc9 00:29:21.283 Malloc10 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=990487 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:21.283 01:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:21.283 [2024-10-01 01:47:01.028694] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 990315 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 990315 ']' 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 990315 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.626 01:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 990315 00:29:26.626 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:26.626 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:26.626 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 990315' 00:29:26.626 killing process with pid 990315 00:29:26.626 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 990315 00:29:26.626 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 990315 00:29:26.626 [2024-10-01 01:47:06.024439] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026900 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.024570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026900 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.024595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026900 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.024609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026900 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.024626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026900 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.030470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6a20 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the 
state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.032606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122750 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 [2024-10-01 01:47:06.034296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb6550 is same with the state(6) to be set 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 starting I/O failed: -6 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 starting I/O failed: -6 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.626 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 [2024-10-01 01:47:06.035877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed 
with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 [2024-10-01 01:47:06.037100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write 
completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 [2024-10-01 01:47:06.038339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 
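The burst of 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' records here and below is the point of nvmf_shutdown_tc4: spdk_nvme_perf is driving 128-deep random writes at 10.0.0.2:4420 when killprocess takes the target (pid 990315) down, so every outstanding command is completed with an error and each host-side qpair reports CQ transport error -6, i.e. ENXIO / 'No such device or address'. Reduced to a hedged sketch, the pattern is as follows (paths and the pid variable are carried over from the setup sketch above; the real script wraps this in its trap/killprocess/waitforlisten helpers):

# start a 20-second, 128-deep randwrite load against the target in the background
./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!

# let I/O ramp up, then kill the target out from under the initiator
sleep 5
kill "$nvmfpid"                       # $nvmfpid is the nvmf_tgt pid captured at launch (990315 in this run)
wait "$nvmfpid" 2>/dev/null || true

# wait for perf to wind down; once every qpair is gone it exits (possibly non-zero).
# The trap shown above only falls back to 'kill -9 $perfpid' if the test itself aborts.
wait "$perfpid"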
00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 [2024-10-01 01:47:06.038635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with starting I/O failed: -6 00:29:26.627 the state(6) to be set 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.627 starting I/O failed: -6 00:29:26.627 [2024-10-01 01:47:06.038672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with the state(6) to be set 00:29:26.627 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.038688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with the state(6) to be set 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.038700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.038713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with the state(6) to be set 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.038726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21254d0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write 
completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21259a0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21259a0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21259a0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21259a0 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.039446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125e70 is same with starting I/O failed: -6 00:29:26.628 the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.039480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125e70 is same with the state(6) to be set 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125e70 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125e70 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.039521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125e70 is same with the state(6) to be set 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 
00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.039935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 starting I/O failed: -6 00:29:26.628 [2024-10-01 01:47:06.039948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 [2024-10-01 01:47:06.039960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 [2024-10-01 01:47:06.039972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 [2024-10-01 01:47:06.039994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125000 is same with the state(6) to be set 00:29:26.628 [2024-10-01 01:47:06.040111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.628 NVMe io qpair process completion error 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, 
sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 [2024-10-01 01:47:06.041318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 starting I/O failed: -6 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.628 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 
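Each aborted command is logged individually, so a run like this produces thousands of near-identical lines; when reading the log it is usually enough to count them and decode the status once. sct=0 selects the generic command status set, and a status code of 8 there corresponds to 'Command Aborted due to SQ Deletion' in the NVMe spec, which is consistent with the submission queues vanishing when the target process dies. A small sketch for summarizing a saved console log, with shutdown_tc4.log as a hypothetical capture of the output above:

# count aborted writes and host-side submission failures
grep -c 'Write completed with error (sct=0, sc=8)' shutdown_tc4.log
grep -c 'starting I/O failed: -6' shutdown_tc4.log

# which qpairs hit the CQ transport error, and how many times each
grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' shutdown_tc4.log | sort | uniq -c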
00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 [2024-10-01 01:47:06.042274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 [2024-10-01 01:47:06.042704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 [2024-10-01 01:47:06.042743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 [2024-10-01 01:47:06.042758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 starting I/O failed: -6 00:29:26.629 [2024-10-01 01:47:06.042770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 [2024-10-01 01:47:06.042781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 [2024-10-01 01:47:06.042793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124660 is same with the state(6) to be set 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write 
completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 [2024-10-01 01:47:06.043506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting I/O failed: -6 00:29:26.629 Write completed with error (sct=0, sc=8) 00:29:26.629 starting 
I/O failed: -6
00:29:26.629 Write completed with error (sct=0, sc=8)
00:29:26.629 starting I/O failed: -6
00:29:26.630 Write completed with error (sct=0, sc=8)
00:29:26.630 starting I/O failed: -6
00:29:26.630 [2024-10-01 01:47:06.044800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.044826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.044857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.044870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.044882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.044896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123470 is same with the state(6) to be set
00:29:26.630 Write completed with error (sct=0, sc=8)
00:29:26.630 starting I/O failed: -6
00:29:26.630 [2024-10-01 01:47:06.045150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.630 NVMe io qpair process completion error
00:29:26.630 Write completed with error (sct=0, sc=8)
00:29:26.630 starting I/O failed: -6
00:29:26.630 [2024-10-01 01:47:06.049330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091b30 is same with the state(6) to be set
00:29:26.630 Write completed with error (sct=0, sc=8)
00:29:26.630 starting I/O failed: -6
00:29:26.630 [2024-10-01 01:47:06.049733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.630 Write completed with error (sct=0, sc=8)
00:29:26.630 starting I/O failed: -6
00:29:26.630 [2024-10-01 01:47:06.049947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.049992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2170 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2660 is same with the state(6) to be set
00:29:26.630 [2024-10-01 01:47:06.050374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 [2024-10-01 01:47:06.050608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2091660 is same with the state(6) to be set
00:29:26.631 Write completed with error (sct=0, sc=8)
00:29:26.631 starting I/O failed: -6
00:29:26.631 [2024-10-01 01:47:06.050666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:26.631 Write completed with error (sct=0, sc=8)
00:29:26.631 starting I/O failed: -6
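The repeated "Write completed with error (sct=0, sc=8)" lines above are printed from an initiator-side I/O completion callback: sct and sc are the status-code-type and status-code fields of the NVMe completion, and for sct=0 (generic command status) a value of sc=8 is, per the NVMe base specification, "Command Aborted due to SQ Deletion", which is consistent with the target tearing the qpairs down while writes are in flight. A minimal sketch of such a callback using the SPDK NVMe driver API is shown below; the io_ctx structure and function name are hypothetical, not the actual code of the tool that produced this log.

#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical per-I/O context for this sketch only. */
struct io_ctx {
	bool failed;
};

/* spdk_nvme_cmd_cb-style completion callback: prints the status fields in
 * the same shape as the "Write completed with error (sct=0, sc=8)" lines. */
static void
write_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0 is the generic status code type; in that set sc=8 is
		 * "command aborted due to SQ deletion" (NVMe base spec), which
		 * fits qpairs being deleted while I/O is outstanding. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
		ctx->failed = true;
		return;
	}

	ctx->failed = false;
}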
00:29:26.631 Write completed with error (sct=0, sc=8)
00:29:26.631 starting I/O failed: -6
00:29:26.631 [2024-10-01 01:47:06.051880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.631 Write completed with error (sct=0, sc=8)
00:29:26.631 starting I/O failed: -6
00:29:26.632 Write completed with error (sct=0, sc=8)
00:29:26.632 starting I/O failed: -6
00:29:26.632 [2024-10-01 01:47:06.053548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.632 NVMe io qpair process completion error
00:29:26.632 Write completed with error (sct=0, sc=8)
00:29:26.632 starting I/O failed: -6
00:29:26.632 [2024-10-01 01:47:06.054671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.632 Write completed with error (sct=0, sc=8)
00:29:26.632 starting I/O failed: -6
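The "nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address)" entries come from the host polling path: spdk_nvme_qpair_process_completions() reports a negative errno once the qpair's transport connection is gone, and -6 corresponds to -ENXIO, the "No such device or address" text in the log. A minimal polling-loop sketch of how a caller would typically detect that condition follows; the function name and error handling are illustrative, not taken from this test.

#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Poll one I/O qpair until the transport reports a failure.
 * spdk_nvme_qpair_process_completions() returns the number of completions
 * it reaped, or a negative errno such as -ENXIO (-6) once the underlying
 * connection has failed, which is what the "CQ transport error -6" lines
 * in the log correspond to. */
static int
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	for (;;) {
		rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
		if (rc < 0) {
			fprintf(stderr, "CQ transport error %d on qpair%s\n",
				(int)rc,
				rc == -ENXIO ? " (No such device or address)" : "");
			/* No further completions will arrive; outstanding I/O
			 * has to be failed or retried after a reconnect/reset. */
			return (int)rc;
		}
		/* rc >= 0: that many completions were processed; a real
		 * application would submit more I/O here instead of spinning. */
	}
}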
00:29:26.632 Write completed with error (sct=0, sc=8)
00:29:26.632 starting I/O failed: -6
00:29:26.632 [2024-10-01 01:47:06.055718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:26.632 Write completed with error (sct=0, sc=8)
00:29:26.632 starting I/O failed: -6
00:29:26.633 Write completed with error (sct=0, sc=8)
00:29:26.633 starting I/O failed: -6
00:29:26.633 [2024-10-01 01:47:06.056932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.633 Write completed with error (sct=0, sc=8)
00:29:26.633 starting I/O failed: -6
00:29:26.633 [2024-10-01 01:47:06.058971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.633 NVMe io qpair process completion error
00:29:26.633 Write completed with error (sct=0, sc=8)
00:29:26.634 Write completed with error (sct=0, sc=8)
00:29:26.634 starting I/O failed: -6
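The "starting I/O failed: -6" lines are the submission side of the same condition: once a qpair has hit a transport error, a new write is rejected synchronously with a negative errno instead of being queued, and the -6 here is consistent with -ENXIO coming straight back from the submit call. A sketch of a submit helper under that assumption is shown below, using spdk_nvme_ns_cmd_write(); the helper name, buffer handling, and the exact print format are illustrative only.

#include <stdio.h>

#include "spdk/nvme.h"

/* Minimal completion callback so this sketch is self-contained. */
static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Submit one write; buf is assumed to be a DMA-safe buffer (e.g. from
 * spdk_dma_malloc()). If the qpair has already failed at the transport
 * level, the call fails immediately with a negative errno, which is where
 * a line like "starting I/O failed: -6" would be printed. */
static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		 void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc;

	rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
				    write_done, NULL, 0 /* io_flags */);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}

	return rc;
}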
00:29:26.634 Write completed with error (sct=0, sc=8)
00:29:26.634 starting I/O failed: -6
00:29:26.634 [2024-10-01 01:47:06.060209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.634 Write completed with error (sct=0, sc=8)
00:29:26.634 starting I/O failed: -6
00:29:26.634 [2024-10-01 01:47:06.061308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:26.634 Write completed with error (sct=0, sc=8)
00:29:26.634 starting I/O failed: -6
00:29:26.634 [2024-10-01 01:47:06.062505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.635 Write completed with error (sct=0, sc=8)
00:29:26.635 starting I/O failed: -6
00:29:26.635 [2024-10-01 01:47:06.064409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.635 NVMe io qpair process completion error
00:29:26.635 Write completed with error (sct=0, sc=8)
00:29:26.635 starting I/O failed: -6
00:29:26.635 Write completed with error (sct=0, sc=8)
00:29:26.635 starting I/O failed: -6
00:29:26.635 [2024-10-01 01:47:06.066519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.635 Write completed with error (sct=0, sc=8)
00:29:26.635 starting I/O failed: -6
00:29:26.636 Write completed with error (sct=0, sc=8)
00:29:26.636 starting I/O failed: -6
00:29:26.636 [2024-10-01 01:47:06.067744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:26.636 Write completed with error (sct=0, sc=8)
00:29:26.636 starting I/O failed: -6
00:29:26.636 [2024-10-01 01:47:06.070193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.636 NVMe io qpair process completion error
00:29:26.636 Write completed with error (sct=0, sc=8)
00:29:26.636 starting I/O failed: -6
00:29:26.637 Write completed with error (sct=0, sc=8)
00:29:26.637 [2024-10-01 01:47:06.071362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:26.637 Write completed with error (sct=0, sc=8)
00:29:26.637 starting I/O failed: -6
00:29:26.637 Write completed with
error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 [2024-10-01 01:47:06.072403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write 
completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 [2024-10-01 01:47:06.073554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 
00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.637 starting I/O failed: -6 00:29:26.637 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 
00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 [2024-10-01 01:47:06.076650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.638 NVMe io qpair process completion error 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, 
sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 [2024-10-01 01:47:06.077883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 
00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 [2024-10-01 01:47:06.078933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.638 Write completed with error (sct=0, sc=8) 00:29:26.638 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 
00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 [2024-10-01 01:47:06.080036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 
00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 [2024-10-01 01:47:06.081985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.639 NVMe io qpair process completion error 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error 
(sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 starting I/O failed: -6 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.639 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O 
failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 [2024-10-01 01:47:06.084051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: 
-6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 [2024-10-01 01:47:06.085247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with 
error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.640 starting I/O failed: -6 00:29:26.640 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error 
(sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 [2024-10-01 01:47:06.087524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.641 NVMe io qpair process completion error 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 starting I/O failed: -6 00:29:26.641 Write completed with error (sct=0, sc=8) 00:29:26.641 Write completed with 
error (sct=0, sc=8) (the 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pair repeats from here for every remaining queued write until the two I/O qpairs are failed)
00:29:26.642 [2024-10-01 01:47:06.090248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.642 [2024-10-01 01:47:06.094212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:26.642 NVMe io qpair process completion error
00:29:26.642 Initializing NVMe Controllers
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:26.642 Controller IO queue size 128, less than required.
00:29:26.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:26.643 Controller IO queue size 128, less than required.
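The advisory printed here, and repeated below for the remaining controllers, names the knob to turn when the initiator queues more I/O than the controller's queue can hold: a lower queue depth or a smaller I/O size on the perf side. As a hedged illustration only (not the harness's actual invocation), a manual re-run of the perf tool against one of the subsystems above could look like the sketch below; the binary path, target address and NQN come from this trace, while the -q/-o values are arbitrary and the flag spellings should be checked against spdk_nvme_perf --help.

# Hedged sketch: drive one subsystem by hand with a lower queue depth (-q) and a small I/O size (-o).
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
$PERF -q 64 -o 4096 -w write -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'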
00:29:26.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:26.643 Controller IO queue size 128, less than required.
00:29:26.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:26.643 Controller IO queue size 128, less than required.
00:29:26.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:26.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:26.643 Initialization complete. Launching workers.
00:29:26.643 ========================================================
00:29:26.643                                                                           Latency(us)
00:29:26.643 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1800.53      77.37   70318.42    1253.84  121067.80
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1783.17      76.62   71024.15     889.77  142819.38
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1820.00      78.20   70229.01     627.11  142333.15
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1808.78      77.72   70030.31     787.28  115265.63
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1812.81      77.89   69894.25     876.35  113437.12
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1789.95      76.91   70813.00     792.16  116333.32
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1796.72      77.20   70566.95     936.10  119796.27
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1882.86      80.90   67372.67    1038.86  108867.37
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1775.13      76.28   71498.37    1085.64  126118.77
00:29:26.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1804.98      77.56   70337.30    1188.66  129439.20
00:29:26.643 ========================================================
00:29:26.643 Total                                                                    :   18074.94     776.66   70192.26     627.11  142819.38
00:29:26.643
00:29:26.643 [2024-10-01 01:47:06.098845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e4160 is same with the state(6) to be set
00:29:26.643 [2024-10-01 01:47:06.098943] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15de1c0 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0530 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15de6d0 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0200 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0860 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ddb20 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dfed0 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dea00 is same with the state(6) to be set 00:29:26.643 [2024-10-01 01:47:06.099436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15de3a0 is same with the state(6) to be set 00:29:26.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:26.901 01:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 990487 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 990487 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 990487 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.841 rmmod nvme_tcp 00:29:27.841 rmmod nvme_fabrics 00:29:27.841 rmmod nvme_keyring 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 990315 ']' 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 990315 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 990315 ']' 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 990315 00:29:27.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (990315) - No such process 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 990315 is not found' 00:29:27.841 Process with pid 990315 is not found 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.841 01:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.378 00:29:30.378 real 0m9.776s 00:29:30.378 user 0m22.272s 00:29:30.378 sys 0m6.224s 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:30.378 ************************************ 00:29:30.378 END TEST nvmf_shutdown_tc4 00:29:30.378 ************************************ 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:30.378 00:29:30.378 real 0m37.311s 00:29:30.378 user 1m38.563s 00:29:30.378 sys 0m12.759s 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.378 ************************************ 00:29:30.378 END TEST nvmf_shutdown 00:29:30.378 ************************************ 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:30.378 00:29:30.378 real 18m2.544s 00:29:30.378 user 50m11.664s 00:29:30.378 sys 3m57.358s 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.378 01:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:30.378 ************************************ 00:29:30.378 END TEST nvmf_target_extra 00:29:30.378 ************************************ 00:29:30.378 01:47:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:30.378 01:47:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:30.378 01:47:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:30.378 01:47:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.378 ************************************ 00:29:30.378 START TEST nvmf_host 00:29:30.378 ************************************ 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:30.378 * Looking for test storage... 
00:29:30.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.378 --rc genhtml_branch_coverage=1 00:29:30.378 --rc genhtml_function_coverage=1 00:29:30.378 --rc genhtml_legend=1 00:29:30.378 --rc geninfo_all_blocks=1 00:29:30.378 --rc geninfo_unexecuted_blocks=1 00:29:30.378 00:29:30.378 ' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:30.378 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.378 --rc genhtml_branch_coverage=1 00:29:30.378 --rc genhtml_function_coverage=1 00:29:30.378 --rc genhtml_legend=1 00:29:30.378 --rc geninfo_all_blocks=1 00:29:30.378 --rc geninfo_unexecuted_blocks=1 00:29:30.378 00:29:30.378 ' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.378 --rc genhtml_branch_coverage=1 00:29:30.378 --rc genhtml_function_coverage=1 00:29:30.378 --rc genhtml_legend=1 00:29:30.378 --rc geninfo_all_blocks=1 00:29:30.378 --rc geninfo_unexecuted_blocks=1 00:29:30.378 00:29:30.378 ' 00:29:30.378 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:30.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.378 --rc genhtml_branch_coverage=1 00:29:30.378 --rc genhtml_function_coverage=1 00:29:30.378 --rc genhtml_legend=1 00:29:30.378 --rc geninfo_all_blocks=1 00:29:30.378 --rc geninfo_unexecuted_blocks=1 00:29:30.378 00:29:30.378 ' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:30.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.379 ************************************ 00:29:30.379 START TEST nvmf_multicontroller 00:29:30.379 ************************************ 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:30.379 * Looking for test storage... 00:29:30.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:29:30.379 01:47:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:30.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.379 --rc genhtml_branch_coverage=1 00:29:30.379 --rc genhtml_function_coverage=1 00:29:30.379 --rc genhtml_legend=1 00:29:30.379 --rc geninfo_all_blocks=1 00:29:30.379 --rc geninfo_unexecuted_blocks=1 00:29:30.379 00:29:30.379 ' 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:30.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.379 --rc genhtml_branch_coverage=1 00:29:30.379 --rc genhtml_function_coverage=1 00:29:30.379 --rc genhtml_legend=1 00:29:30.379 --rc geninfo_all_blocks=1 00:29:30.379 --rc geninfo_unexecuted_blocks=1 00:29:30.379 00:29:30.379 ' 00:29:30.379 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:30.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.380 --rc genhtml_branch_coverage=1 00:29:30.380 --rc genhtml_function_coverage=1 00:29:30.380 --rc genhtml_legend=1 00:29:30.380 --rc geninfo_all_blocks=1 00:29:30.380 --rc geninfo_unexecuted_blocks=1 00:29:30.380 00:29:30.380 ' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.380 --rc genhtml_branch_coverage=1 00:29:30.380 --rc genhtml_function_coverage=1 00:29:30.380 --rc genhtml_legend=1 00:29:30.380 --rc geninfo_all_blocks=1 00:29:30.380 --rc geninfo_unexecuted_blocks=1 00:29:30.380 00:29:30.380 ' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:30.380 01:47:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:30.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.380 01:47:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.380 01:47:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.917 
01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:32.917 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:32.917 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.917 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:32.918 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:32.918 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.918 01:47:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:29:32.918 00:29:32.918 --- 10.0.0.2 ping statistics --- 00:29:32.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.918 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:29:32.918 00:29:32.918 --- 10.0.0.1 ping statistics --- 00:29:32.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.918 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=993291 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 993291 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 993291 ']' 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.918 [2024-10-01 01:47:12.481100] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:29:32.918 [2024-10-01 01:47:12.481173] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.918 [2024-10-01 01:47:12.548208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.918 [2024-10-01 01:47:12.634814] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.918 [2024-10-01 01:47:12.634885] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.918 [2024-10-01 01:47:12.634908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.918 [2024-10-01 01:47:12.634919] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.918 [2024-10-01 01:47:12.634929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.918 [2024-10-01 01:47:12.635034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.918 [2024-10-01 01:47:12.635094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.918 [2024-10-01 01:47:12.635098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.918 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 [2024-10-01 01:47:12.771689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 Malloc0 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 [2024-10-01 01:47:12.838685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 [2024-10-01 01:47:12.846538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.177 Malloc1 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.177 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=993320 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 993320 /var/tmp/bdevperf.sock 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 993320 ']' 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:33.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
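For reference, the subsystem provisioning traced above boils down to a handful of RPCs against the target's default /var/tmp/spdk.sock (the UNIX socket is reachable from the root namespace even though the target runs inside cvl_0_0_ns_spdk, since network namespaces do not isolate the filesystem). A minimal sketch with scripts/rpc.py, reusing the names, sizes, and flags from this run; the test issues the equivalent calls through its rpc_cmd wrapper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # TCP transport; -o and -u 8192 are passed exactly as in the trace
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # two 64 MB malloc bdevs (512-byte blocks), each exported via its own subsystem
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

The same port pair (4420/4421) is advertised for both subsystems, which is what lets the multipath cases below exercise a second path to an existing controller.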
00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.178 01:47:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.438 NVMe0n1 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.438 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.699 1 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.699 request: 00:29:33.699 { 00:29:33.699 "name": "NVMe0", 00:29:33.699 "trtype": "tcp", 00:29:33.699 "traddr": "10.0.0.2", 00:29:33.699 "adrfam": "ipv4", 00:29:33.699 "trsvcid": "4420", 00:29:33.699 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:33.699 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:33.699 "hostaddr": "10.0.0.1", 00:29:33.699 "prchk_reftag": false, 00:29:33.699 "prchk_guard": false, 00:29:33.699 "hdgst": false, 00:29:33.699 "ddgst": false, 00:29:33.699 "allow_unrecognized_csi": false, 00:29:33.699 "method": "bdev_nvme_attach_controller", 00:29:33.699 "req_id": 1 00:29:33.699 } 00:29:33.699 Got JSON-RPC error response 00:29:33.699 response: 00:29:33.699 { 00:29:33.699 "code": -114, 00:29:33.699 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:33.699 } 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.699 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.699 request: 00:29:33.699 { 00:29:33.699 "name": "NVMe0", 00:29:33.699 "trtype": "tcp", 00:29:33.699 "traddr": "10.0.0.2", 00:29:33.699 "adrfam": "ipv4", 00:29:33.699 "trsvcid": "4420", 00:29:33.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:33.700 "hostaddr": "10.0.0.1", 00:29:33.700 "prchk_reftag": false, 00:29:33.700 "prchk_guard": false, 00:29:33.700 "hdgst": false, 00:29:33.700 "ddgst": false, 00:29:33.700 "allow_unrecognized_csi": false, 00:29:33.700 "method": "bdev_nvme_attach_controller", 00:29:33.700 "req_id": 1 00:29:33.700 } 00:29:33.700 Got JSON-RPC error response 00:29:33.700 response: 00:29:33.700 { 00:29:33.700 "code": -114, 00:29:33.700 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:33.700 } 00:29:33.700 01:47:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.700 request: 00:29:33.700 { 00:29:33.700 "name": "NVMe0", 00:29:33.700 "trtype": "tcp", 00:29:33.700 "traddr": "10.0.0.2", 00:29:33.700 "adrfam": "ipv4", 00:29:33.700 "trsvcid": "4420", 00:29:33.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.700 "hostaddr": "10.0.0.1", 00:29:33.700 "prchk_reftag": false, 00:29:33.700 "prchk_guard": false, 00:29:33.700 "hdgst": false, 00:29:33.700 "ddgst": false, 00:29:33.700 "multipath": "disable", 00:29:33.700 "allow_unrecognized_csi": false, 00:29:33.700 "method": "bdev_nvme_attach_controller", 00:29:33.700 "req_id": 1 00:29:33.700 } 00:29:33.700 Got JSON-RPC error response 00:29:33.700 response: 00:29:33.700 { 00:29:33.700 "code": -114, 00:29:33.700 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:33.700 } 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:33.700 01:47:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.700 request: 00:29:33.700 { 00:29:33.700 "name": "NVMe0", 00:29:33.700 "trtype": "tcp", 00:29:33.700 "traddr": "10.0.0.2", 00:29:33.700 "adrfam": "ipv4", 00:29:33.700 "trsvcid": "4420", 00:29:33.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.700 "hostaddr": "10.0.0.1", 00:29:33.700 "prchk_reftag": false, 00:29:33.700 "prchk_guard": false, 00:29:33.700 "hdgst": false, 00:29:33.700 "ddgst": false, 00:29:33.700 "multipath": "failover", 00:29:33.700 "allow_unrecognized_csi": false, 00:29:33.700 "method": "bdev_nvme_attach_controller", 00:29:33.700 "req_id": 1 00:29:33.700 } 00:29:33.700 Got JSON-RPC error response 00:29:33.700 response: 00:29:33.700 { 00:29:33.700 "code": -114, 00:29:33.700 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:33.700 } 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.700 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.959 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
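The -114 errors above all stem from re-using the controller name NVMe0 with a conflicting path description. A minimal sketch of the same calls issued directly with rpc.py against the bdevperf RPC socket started earlier in the trace (assuming it is still listening on /var/tmp/bdevperf.sock); only the first call and the last one, which adds a second path to cnode1 via port 4421, are expected to succeed:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # first path to cnode1: creates controller NVMe0 and bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # same name but a different subsystem (or a different hostnqn, or multipath
    # disabled) is rejected with JSON-RPC error -114, as seen in the log
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "rejected (duplicate name)"

    # a second listener address of the *same* subsystem is accepted as an extra path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Once the paths are attached, the traced bdevperf.py ... perform_tests call drives the 128-deep 4 KiB write workload whose IOPS/latency figures appear in the JSON results further down.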
00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.959 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:33.959 01:47:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:35.336 { 00:29:35.336 "results": [ 00:29:35.336 { 00:29:35.336 "job": "NVMe0n1", 00:29:35.336 "core_mask": "0x1", 00:29:35.336 "workload": "write", 00:29:35.336 "status": "finished", 00:29:35.336 "queue_depth": 128, 00:29:35.336 "io_size": 4096, 00:29:35.336 "runtime": 1.010435, 00:29:35.336 "iops": 18855.245513071102, 00:29:35.336 "mibps": 73.653302785434, 00:29:35.336 "io_failed": 0, 00:29:35.336 "io_timeout": 0, 00:29:35.336 "avg_latency_us": 6778.29538495035, 00:29:35.336 "min_latency_us": 3373.8903703703704, 00:29:35.336 "max_latency_us": 13301.38074074074 00:29:35.336 } 00:29:35.336 ], 00:29:35.336 "core_count": 1 00:29:35.336 } 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 993320 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 993320 ']' 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 993320 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993320 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993320' 00:29:35.336 killing process with pid 993320 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 993320 00:29:35.336 01:47:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 993320 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:35.336 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:35.336 [2024-10-01 01:47:12.954113] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:29:35.336 [2024-10-01 01:47:12.954213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993320 ] 00:29:35.336 [2024-10-01 01:47:13.015444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.336 [2024-10-01 01:47:13.102272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.336 [2024-10-01 01:47:13.658584] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 3af6e2c2-2200-4a3d-9892-540f3b9ee067 already exists 00:29:35.336 [2024-10-01 01:47:13.658624] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:3af6e2c2-2200-4a3d-9892-540f3b9ee067 alias for bdev NVMe1n1 00:29:35.336 [2024-10-01 01:47:13.658640] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:35.336 Running I/O for 1 seconds... 00:29:35.336 18797.00 IOPS, 73.43 MiB/s 00:29:35.336 Latency(us) 00:29:35.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.336 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:35.336 NVMe0n1 : 1.01 18855.25 73.65 0.00 0.00 6778.30 3373.89 13301.38 00:29:35.336 =================================================================================================================== 00:29:35.336 Total : 18855.25 73.65 0.00 0.00 6778.30 3373.89 13301.38 00:29:35.336 Received shutdown signal, test time was about 1.000000 seconds 00:29:35.336 00:29:35.336 Latency(us) 00:29:35.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.336 =================================================================================================================== 00:29:35.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.336 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.336 rmmod nvme_tcp 00:29:35.336 rmmod nvme_fabrics 00:29:35.336 rmmod nvme_keyring 00:29:35.336 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 993291 ']' 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@514 -- # killprocess 993291 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 993291 ']' 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 993291 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 993291 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 993291' 00:29:35.597 killing process with pid 993291 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 993291 00:29:35.597 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 993291 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.856 01:47:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.762 01:47:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.762 00:29:37.762 real 0m7.659s 00:29:37.762 user 0m11.440s 00:29:37.762 sys 0m2.529s 00:29:37.762 01:47:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.762 01:47:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.762 ************************************ 00:29:37.762 END TEST nvmf_multicontroller 00:29:37.762 ************************************ 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host -- 
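nvmftestfini, traced here and just below, undoes the per-test setup: the initiator-side NVMe modules are unloaded, the SPDK_NVMF-tagged iptables rules are stripped, and the target's namespace is torn down. A rough standalone equivalent, assuming the interface and namespace names from this run; the body of _remove_spdk_ns is not shown in the log, so the explicit netns delete is an assumption:

    # unload initiator modules (the log shows nvme_tcp/nvme_fabrics/nvme_keyring being removed)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # keep only firewall rules that are not tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # drop the test addresses and (assumption) the namespace that held the target port
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true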
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.020 ************************************ 00:29:38.020 START TEST nvmf_aer 00:29:38.020 ************************************ 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:38.020 * Looking for test storage... 00:29:38.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:38.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.020 --rc genhtml_branch_coverage=1 00:29:38.020 --rc genhtml_function_coverage=1 00:29:38.020 --rc genhtml_legend=1 00:29:38.020 --rc geninfo_all_blocks=1 00:29:38.020 --rc geninfo_unexecuted_blocks=1 00:29:38.020 00:29:38.020 ' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:38.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.020 --rc genhtml_branch_coverage=1 00:29:38.020 --rc genhtml_function_coverage=1 00:29:38.020 --rc genhtml_legend=1 00:29:38.020 --rc geninfo_all_blocks=1 00:29:38.020 --rc geninfo_unexecuted_blocks=1 00:29:38.020 00:29:38.020 ' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:38.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.020 --rc genhtml_branch_coverage=1 00:29:38.020 --rc genhtml_function_coverage=1 00:29:38.020 --rc genhtml_legend=1 00:29:38.020 --rc geninfo_all_blocks=1 00:29:38.020 --rc geninfo_unexecuted_blocks=1 00:29:38.020 00:29:38.020 ' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:38.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.020 --rc genhtml_branch_coverage=1 00:29:38.020 --rc genhtml_function_coverage=1 00:29:38.020 --rc genhtml_legend=1 00:29:38.020 --rc geninfo_all_blocks=1 00:29:38.020 --rc geninfo_unexecuted_blocks=1 00:29:38.020 00:29:38.020 ' 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.020 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:38.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.021 01:47:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:40.549 01:47:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:40.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:40.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:40.549 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:40.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:40.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:40.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:29:40.550 00:29:40.550 --- 10.0.0.2 ping statistics --- 00:29:40.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.550 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:29:40.550 00:29:40.550 --- 10.0.0.1 ping statistics --- 00:29:40.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.550 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=995537 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 995537 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 995537 ']' 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.550 01:47:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 [2024-10-01 01:47:20.035619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
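The trace above brings up the TCP test network by hand before the target starts: the cvl_0_0 port is moved into a private namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, the host keeps cvl_0_1 as 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-checked; only then is nvmf_tgt launched inside that namespace. A minimal standalone sketch of the same bring-up, assuming the same interface names and addresses as this run (outside the harness you would substitute your own NICs):

  # create a network namespace for the target side and move one port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps the second port on the host side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring both ends (and loopback inside the namespace) up
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP traffic to port 4420 and sanity-check connectivity both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
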
00:29:40.550 [2024-10-01 01:47:20.035709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.550 [2024-10-01 01:47:20.105084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.550 [2024-10-01 01:47:20.194793] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.550 [2024-10-01 01:47:20.194848] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.550 [2024-10-01 01:47:20.194871] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.550 [2024-10-01 01:47:20.194897] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.550 [2024-10-01 01:47:20.194906] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.550 [2024-10-01 01:47:20.195039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.550 [2024-10-01 01:47:20.195080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.550 [2024-10-01 01:47:20.195105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.550 [2024-10-01 01:47:20.195108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 [2024-10-01 01:47:20.342212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 Malloc0 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.550 [2024-10-01 01:47:20.393126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.550 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.551 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:40.551 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.551 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:40.809 [ 00:29:40.809 { 00:29:40.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:40.809 "subtype": "Discovery", 00:29:40.809 "listen_addresses": [], 00:29:40.809 "allow_any_host": true, 00:29:40.809 "hosts": [] 00:29:40.809 }, 00:29:40.809 { 00:29:40.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.809 "subtype": "NVMe", 00:29:40.809 "listen_addresses": [ 00:29:40.809 { 00:29:40.809 "trtype": "TCP", 00:29:40.809 "adrfam": "IPv4", 00:29:40.809 "traddr": "10.0.0.2", 00:29:40.809 "trsvcid": "4420" 00:29:40.809 } 00:29:40.809 ], 00:29:40.809 "allow_any_host": true, 00:29:40.809 "hosts": [], 00:29:40.809 "serial_number": "SPDK00000000000001", 00:29:40.809 "model_number": "SPDK bdev Controller", 00:29:40.809 "max_namespaces": 2, 00:29:40.809 "min_cntlid": 1, 00:29:40.809 "max_cntlid": 65519, 00:29:40.809 "namespaces": [ 00:29:40.809 { 00:29:40.809 "nsid": 1, 00:29:40.809 "bdev_name": "Malloc0", 00:29:40.809 "name": "Malloc0", 00:29:40.809 "nguid": "9B25B0861B08460C825CE8A34A46C99A", 00:29:40.809 "uuid": "9b25b086-1b08-460c-825c-e8a34a46c99a" 00:29:40.809 } 00:29:40.809 ] 00:29:40.809 } 00:29:40.809 ] 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=995681 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:40.809 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.067 Malloc1 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.067 [ 00:29:41.067 { 00:29:41.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:41.067 "subtype": "Discovery", 00:29:41.067 "listen_addresses": [], 00:29:41.067 "allow_any_host": true, 00:29:41.067 "hosts": [] 00:29:41.067 }, 00:29:41.067 { 00:29:41.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.067 "subtype": "NVMe", 00:29:41.067 "listen_addresses": [ 00:29:41.067 { 00:29:41.067 "trtype": "TCP", 00:29:41.067 "adrfam": "IPv4", 00:29:41.067 "traddr": "10.0.0.2", 00:29:41.067 "trsvcid": "4420" 00:29:41.067 } 00:29:41.067 ], 00:29:41.067 "allow_any_host": true, 00:29:41.067 "hosts": [], 00:29:41.067 "serial_number": "SPDK00000000000001", 00:29:41.067 "model_number": "SPDK bdev Controller", 00:29:41.067 "max_namespaces": 2, 00:29:41.067 "min_cntlid": 1, 00:29:41.067 "max_cntlid": 65519, 00:29:41.067 "namespaces": [ 00:29:41.067 
{ 00:29:41.067 "nsid": 1, 00:29:41.067 "bdev_name": "Malloc0", 00:29:41.067 "name": "Malloc0", 00:29:41.067 "nguid": "9B25B0861B08460C825CE8A34A46C99A", 00:29:41.067 "uuid": "9b25b086-1b08-460c-825c-e8a34a46c99a" 00:29:41.067 }, 00:29:41.067 { 00:29:41.067 "nsid": 2, 00:29:41.067 "bdev_name": "Malloc1", 00:29:41.067 Asynchronous Event Request test 00:29:41.067 Attaching to 10.0.0.2 00:29:41.067 Attached to 10.0.0.2 00:29:41.067 Registering asynchronous event callbacks... 00:29:41.067 Starting namespace attribute notice tests for all controllers... 00:29:41.067 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:41.067 aer_cb - Changed Namespace 00:29:41.067 Cleaning up... 00:29:41.067 "name": "Malloc1", 00:29:41.067 "nguid": "E845CBFFEBC94EBFA81FAE1D1A23055C", 00:29:41.067 "uuid": "e845cbff-ebc9-4ebf-a81f-ae1d1a23055c" 00:29:41.067 } 00:29:41.067 ] 00:29:41.067 } 00:29:41.067 ] 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 995681 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.067 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.068 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.068 rmmod nvme_tcp 00:29:41.068 rmmod nvme_fabrics 00:29:41.068 rmmod nvme_keyring 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 995537 ']' 00:29:41.327 
01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 995537 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 995537 ']' 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 995537 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995537 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995537' 00:29:41.327 killing process with pid 995537 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 995537 00:29:41.327 01:47:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 995537 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.587 01:47:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.486 00:29:43.486 real 0m5.636s 00:29:43.486 user 0m4.819s 00:29:43.486 sys 0m1.960s 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.486 ************************************ 00:29:43.486 END TEST nvmf_aer 00:29:43.486 ************************************ 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:43.486 01:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.744 ************************************ 00:29:43.744 START TEST nvmf_async_init 00:29:43.744 
************************************ 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:43.744 * Looking for test storage... 00:29:43.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.744 --rc genhtml_branch_coverage=1 00:29:43.744 --rc genhtml_function_coverage=1 00:29:43.744 --rc genhtml_legend=1 00:29:43.744 --rc geninfo_all_blocks=1 00:29:43.744 --rc geninfo_unexecuted_blocks=1 00:29:43.744 00:29:43.744 ' 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.744 --rc genhtml_branch_coverage=1 00:29:43.744 --rc genhtml_function_coverage=1 00:29:43.744 --rc genhtml_legend=1 00:29:43.744 --rc geninfo_all_blocks=1 00:29:43.744 --rc geninfo_unexecuted_blocks=1 00:29:43.744 00:29:43.744 ' 00:29:43.744 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.744 --rc genhtml_branch_coverage=1 00:29:43.744 --rc genhtml_function_coverage=1 00:29:43.744 --rc genhtml_legend=1 00:29:43.744 --rc geninfo_all_blocks=1 00:29:43.745 --rc geninfo_unexecuted_blocks=1 00:29:43.745 00:29:43.745 ' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:43.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.745 --rc genhtml_branch_coverage=1 00:29:43.745 --rc genhtml_function_coverage=1 00:29:43.745 --rc genhtml_legend=1 00:29:43.745 --rc geninfo_all_blocks=1 00:29:43.745 --rc geninfo_unexecuted_blocks=1 00:29:43.745 00:29:43.745 ' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.745 01:47:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:43.745 01:47:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b0ee795ac63247978b1967510e14c2a2 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.745 01:47:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:46.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:46.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:46.283 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:46.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:46.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:29:46.284 00:29:46.284 --- 10.0.0.2 ping statistics --- 00:29:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.284 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:29:46.284 00:29:46.284 --- 10.0.0.1 ping statistics --- 00:29:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.284 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=997628 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 997628 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 997628 ']' 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.284 01:47:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 [2024-10-01 01:47:25.751474] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
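As in the aer run above, the async_init trace now starts the target inside the namespace and drives it over its RPC socket. A condensed sketch of the launch and the first RPC calls that follow in this trace, assuming rpc_cmd is a thin forwarder to scripts/rpc.py on the default /var/tmp/spdk.sock and with paths abbreviated (the flags are copied verbatim from the trace, not asserted independently):

  # launch the NVMe-oF target in the namespace on a single core (-m 0x1)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # wait for the RPC socket before issuing commands (the harness's waitforlisten
  # polls in a similar way; this single probe is a simplification)
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null

  # create the TCP transport and the null bdev (null0, size 1024, block size 512)
  # that the async_init test registers as its namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512
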
00:29:46.284 [2024-10-01 01:47:25.751552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.284 [2024-10-01 01:47:25.822335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.284 [2024-10-01 01:47:25.917864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.284 [2024-10-01 01:47:25.917945] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.284 [2024-10-01 01:47:25.917959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.284 [2024-10-01 01:47:25.917970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.284 [2024-10-01 01:47:25.917980] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.284 [2024-10-01 01:47:25.918037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 [2024-10-01 01:47:26.061797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 null0 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.284 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b0ee795ac63247978b1967510e14c2a2 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.285 [2024-10-01 01:47:26.102077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.285 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.543 nvme0n1 00:29:46.543 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.543 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:46.543 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.543 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.543 [ 00:29:46.543 { 00:29:46.543 "name": "nvme0n1", 00:29:46.543 "aliases": [ 00:29:46.543 "b0ee795a-c632-4797-8b19-67510e14c2a2" 00:29:46.543 ], 00:29:46.543 "product_name": "NVMe disk", 00:29:46.543 "block_size": 512, 00:29:46.543 "num_blocks": 2097152, 00:29:46.543 "uuid": "b0ee795a-c632-4797-8b19-67510e14c2a2", 00:29:46.543 "numa_id": 0, 00:29:46.543 "assigned_rate_limits": { 00:29:46.544 "rw_ios_per_sec": 0, 00:29:46.544 "rw_mbytes_per_sec": 0, 00:29:46.544 "r_mbytes_per_sec": 0, 00:29:46.544 "w_mbytes_per_sec": 0 00:29:46.544 }, 00:29:46.544 "claimed": false, 00:29:46.544 "zoned": false, 00:29:46.544 "supported_io_types": { 00:29:46.544 "read": true, 00:29:46.544 "write": true, 00:29:46.544 "unmap": false, 00:29:46.544 "flush": true, 00:29:46.544 "reset": true, 00:29:46.544 "nvme_admin": true, 00:29:46.544 "nvme_io": true, 00:29:46.544 "nvme_io_md": false, 00:29:46.544 "write_zeroes": true, 00:29:46.544 "zcopy": false, 00:29:46.544 "get_zone_info": false, 00:29:46.544 "zone_management": false, 00:29:46.544 "zone_append": false, 00:29:46.544 "compare": true, 00:29:46.544 "compare_and_write": true, 00:29:46.544 "abort": true, 00:29:46.544 "seek_hole": false, 00:29:46.544 "seek_data": false, 00:29:46.544 "copy": true, 00:29:46.544 "nvme_iov_md": false 00:29:46.544 }, 00:29:46.544 
"memory_domains": [ 00:29:46.544 { 00:29:46.544 "dma_device_id": "system", 00:29:46.544 "dma_device_type": 1 00:29:46.544 } 00:29:46.544 ], 00:29:46.544 "driver_specific": { 00:29:46.544 "nvme": [ 00:29:46.544 { 00:29:46.544 "trid": { 00:29:46.544 "trtype": "TCP", 00:29:46.544 "adrfam": "IPv4", 00:29:46.544 "traddr": "10.0.0.2", 00:29:46.544 "trsvcid": "4420", 00:29:46.544 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:46.544 }, 00:29:46.544 "ctrlr_data": { 00:29:46.544 "cntlid": 1, 00:29:46.544 "vendor_id": "0x8086", 00:29:46.544 "model_number": "SPDK bdev Controller", 00:29:46.544 "serial_number": "00000000000000000000", 00:29:46.544 "firmware_revision": "25.01", 00:29:46.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.544 "oacs": { 00:29:46.544 "security": 0, 00:29:46.544 "format": 0, 00:29:46.544 "firmware": 0, 00:29:46.544 "ns_manage": 0 00:29:46.544 }, 00:29:46.544 "multi_ctrlr": true, 00:29:46.544 "ana_reporting": false 00:29:46.544 }, 00:29:46.544 "vs": { 00:29:46.544 "nvme_version": "1.3" 00:29:46.544 }, 00:29:46.544 "ns_data": { 00:29:46.544 "id": 1, 00:29:46.544 "can_share": true 00:29:46.544 } 00:29:46.544 } 00:29:46.544 ], 00:29:46.544 "mp_policy": "active_passive" 00:29:46.544 } 00:29:46.544 } 00:29:46.544 ] 00:29:46.544 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.544 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:46.544 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.544 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.544 [2024-10-01 01:47:26.355570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:46.544 [2024-10-01 01:47:26.355659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dacaa0 (9): Bad file descriptor 00:29:46.802 [2024-10-01 01:47:26.500188] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:46.802 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.802 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:46.802 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.802 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.802 [ 00:29:46.802 { 00:29:46.802 "name": "nvme0n1", 00:29:46.802 "aliases": [ 00:29:46.802 "b0ee795a-c632-4797-8b19-67510e14c2a2" 00:29:46.802 ], 00:29:46.802 "product_name": "NVMe disk", 00:29:46.802 "block_size": 512, 00:29:46.802 "num_blocks": 2097152, 00:29:46.802 "uuid": "b0ee795a-c632-4797-8b19-67510e14c2a2", 00:29:46.802 "numa_id": 0, 00:29:46.802 "assigned_rate_limits": { 00:29:46.802 "rw_ios_per_sec": 0, 00:29:46.802 "rw_mbytes_per_sec": 0, 00:29:46.802 "r_mbytes_per_sec": 0, 00:29:46.802 "w_mbytes_per_sec": 0 00:29:46.802 }, 00:29:46.802 "claimed": false, 00:29:46.802 "zoned": false, 00:29:46.802 "supported_io_types": { 00:29:46.802 "read": true, 00:29:46.802 "write": true, 00:29:46.802 "unmap": false, 00:29:46.802 "flush": true, 00:29:46.802 "reset": true, 00:29:46.802 "nvme_admin": true, 00:29:46.802 "nvme_io": true, 00:29:46.802 "nvme_io_md": false, 00:29:46.802 "write_zeroes": true, 00:29:46.802 "zcopy": false, 00:29:46.802 "get_zone_info": false, 00:29:46.802 "zone_management": false, 00:29:46.802 "zone_append": false, 00:29:46.802 "compare": true, 00:29:46.802 "compare_and_write": true, 00:29:46.802 "abort": true, 00:29:46.802 "seek_hole": false, 00:29:46.802 "seek_data": false, 00:29:46.802 "copy": true, 00:29:46.803 "nvme_iov_md": false 00:29:46.803 }, 00:29:46.803 "memory_domains": [ 00:29:46.803 { 00:29:46.803 "dma_device_id": "system", 00:29:46.803 "dma_device_type": 1 00:29:46.803 } 00:29:46.803 ], 00:29:46.803 "driver_specific": { 00:29:46.803 "nvme": [ 00:29:46.803 { 00:29:46.803 "trid": { 00:29:46.803 "trtype": "TCP", 00:29:46.803 "adrfam": "IPv4", 00:29:46.803 "traddr": "10.0.0.2", 00:29:46.803 "trsvcid": "4420", 00:29:46.803 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:46.803 }, 00:29:46.803 "ctrlr_data": { 00:29:46.803 "cntlid": 2, 00:29:46.803 "vendor_id": "0x8086", 00:29:46.803 "model_number": "SPDK bdev Controller", 00:29:46.803 "serial_number": "00000000000000000000", 00:29:46.803 "firmware_revision": "25.01", 00:29:46.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.803 "oacs": { 00:29:46.803 "security": 0, 00:29:46.803 "format": 0, 00:29:46.803 "firmware": 0, 00:29:46.803 "ns_manage": 0 00:29:46.803 }, 00:29:46.803 "multi_ctrlr": true, 00:29:46.803 "ana_reporting": false 00:29:46.803 }, 00:29:46.803 "vs": { 00:29:46.803 "nvme_version": "1.3" 00:29:46.803 }, 00:29:46.803 "ns_data": { 00:29:46.803 "id": 1, 00:29:46.803 "can_share": true 00:29:46.803 } 00:29:46.803 } 00:29:46.803 ], 00:29:46.803 "mp_policy": "active_passive" 00:29:46.803 } 00:29:46.803 } 00:29:46.803 ] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
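The second bdev_get_bdevs dump differs from the first only in ctrlr_data.cntlid, which moves from 1 to 2: the reset tore down the admin connection and the reconnect negotiated a new controller ID. Reusing the RPC shorthand from the previous sketch, a small check of that effect (jq is not part of the test and is shown only to illustrate the JSON layout):

  cntlid=$($RPC bdev_get_bdevs -b nvme0n1 \
    | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
  [ "$cntlid" -eq 2 ] && echo "reset reconnected the controller (cntlid=$cntlid)"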
00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.qGw5fGmCCI 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.qGw5fGmCCI 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.qGw5fGmCCI 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 [2024-10-01 01:47:26.560300] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:46.803 [2024-10-01 01:47:26.560431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:46.803 [2024-10-01 01:47:26.576357] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:46.803 nvme0n1 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.803 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.063 [ 00:29:47.063 { 00:29:47.063 "name": "nvme0n1", 00:29:47.063 "aliases": [ 00:29:47.063 "b0ee795a-c632-4797-8b19-67510e14c2a2" 00:29:47.063 ], 00:29:47.063 "product_name": "NVMe disk", 00:29:47.063 "block_size": 512, 00:29:47.063 "num_blocks": 2097152, 00:29:47.063 "uuid": "b0ee795a-c632-4797-8b19-67510e14c2a2", 00:29:47.063 "numa_id": 0, 00:29:47.063 "assigned_rate_limits": { 00:29:47.063 "rw_ios_per_sec": 0, 00:29:47.063 "rw_mbytes_per_sec": 0, 00:29:47.063 "r_mbytes_per_sec": 0, 00:29:47.063 "w_mbytes_per_sec": 0 00:29:47.063 }, 00:29:47.063 "claimed": false, 00:29:47.063 "zoned": false, 00:29:47.063 "supported_io_types": { 00:29:47.063 "read": true, 00:29:47.063 "write": true, 00:29:47.063 "unmap": false, 00:29:47.063 "flush": true, 00:29:47.063 "reset": true, 00:29:47.063 "nvme_admin": true, 00:29:47.063 "nvme_io": true, 00:29:47.063 "nvme_io_md": false, 00:29:47.063 "write_zeroes": true, 00:29:47.063 "zcopy": false, 00:29:47.063 "get_zone_info": false, 00:29:47.063 "zone_management": false, 00:29:47.063 "zone_append": false, 00:29:47.063 "compare": true, 00:29:47.063 "compare_and_write": true, 00:29:47.063 "abort": true, 00:29:47.063 "seek_hole": false, 00:29:47.063 "seek_data": false, 00:29:47.063 "copy": true, 00:29:47.063 "nvme_iov_md": false 00:29:47.063 }, 00:29:47.063 "memory_domains": [ 00:29:47.063 { 00:29:47.063 "dma_device_id": "system", 00:29:47.063 "dma_device_type": 1 00:29:47.063 } 00:29:47.063 ], 00:29:47.063 "driver_specific": { 00:29:47.063 "nvme": [ 00:29:47.063 { 00:29:47.063 "trid": { 00:29:47.063 "trtype": "TCP", 00:29:47.063 "adrfam": "IPv4", 00:29:47.063 "traddr": "10.0.0.2", 00:29:47.063 "trsvcid": "4421", 00:29:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.063 }, 00:29:47.063 "ctrlr_data": { 00:29:47.063 "cntlid": 3, 00:29:47.063 "vendor_id": "0x8086", 00:29:47.063 "model_number": "SPDK bdev Controller", 00:29:47.063 "serial_number": "00000000000000000000", 00:29:47.063 "firmware_revision": "25.01", 00:29:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.063 "oacs": { 00:29:47.063 "security": 0, 00:29:47.063 "format": 0, 00:29:47.063 "firmware": 0, 00:29:47.063 "ns_manage": 0 00:29:47.063 }, 00:29:47.063 "multi_ctrlr": true, 00:29:47.063 "ana_reporting": false 00:29:47.063 }, 00:29:47.063 "vs": { 00:29:47.063 "nvme_version": "1.3" 00:29:47.063 }, 00:29:47.063 "ns_data": { 00:29:47.063 "id": 1, 00:29:47.063 "can_share": true 00:29:47.063 } 00:29:47.063 } 00:29:47.063 ], 00:29:47.063 "mp_policy": "active_passive" 00:29:47.063 } 00:29:47.063 } 00:29:47.063 ] 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.qGw5fGmCCI 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
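The block above is the TLS leg of the test: a PSK interchange string is written to a temp file with mode 0600, registered through keyring_file_add_key, any-host access is disabled, a second listener is opened on port 4421 with --secure-channel, the host NQN is mapped to the key, and the initiator attaches with the same --psk; the resulting dump shows trsvcid 4421 and cntlid 3. A hedged sketch of that sequence, again reusing the RPC shorthand (the key string is copied from the trace and is a test vector, not a real secret):

  KEY_FILE=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_FILE"
  chmod 0600 "$KEY_FILE"
  $RPC keyring_file_add_key key0 "$KEY_FILE"
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  $RPC bdev_nvme_detach_controller nvme0
  rm -f "$KEY_FILE"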
00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.063 rmmod nvme_tcp 00:29:47.063 rmmod nvme_fabrics 00:29:47.063 rmmod nvme_keyring 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 997628 ']' 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 997628 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 997628 ']' 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 997628 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997628 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997628' 00:29:47.063 killing process with pid 997628 00:29:47.063 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 997628 00:29:47.064 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 997628 00:29:47.324 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:47.324 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:47.324 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:47.324 01:47:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.324 
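nvmftestfini above unwinds the setup: the nvme-tcp and nvme-fabrics modules are removed, the target is killed by pid after its process name is verified, and iptr strips only the firewall rules the test tagged with the SPDK_NVMF comment before the namespace goes away. The comment-keyed firewall cleanup is the reusable trick; a minimal sketch of the teardown, assuming _remove_spdk_ns simply deletes the test namespace (its body is not shown in this trace):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test added
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumption about what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1                               # as nvmf/common.sh@303 does below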
01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.324 01:47:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.232 01:47:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.232 00:29:49.232 real 0m5.722s 00:29:49.232 user 0m2.250s 00:29:49.232 sys 0m1.906s 00:29:49.232 01:47:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:49.232 01:47:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.232 ************************************ 00:29:49.232 END TEST nvmf_async_init 00:29:49.232 ************************************ 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.491 ************************************ 00:29:49.491 START TEST dma 00:29:49.491 ************************************ 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:49.491 * Looking for test storage... 00:29:49.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:49.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.491 --rc genhtml_branch_coverage=1 00:29:49.491 --rc genhtml_function_coverage=1 00:29:49.491 --rc genhtml_legend=1 00:29:49.491 --rc geninfo_all_blocks=1 00:29:49.491 --rc geninfo_unexecuted_blocks=1 00:29:49.491 00:29:49.491 ' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:49.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.491 --rc genhtml_branch_coverage=1 00:29:49.491 --rc genhtml_function_coverage=1 00:29:49.491 --rc genhtml_legend=1 00:29:49.491 --rc geninfo_all_blocks=1 00:29:49.491 --rc geninfo_unexecuted_blocks=1 00:29:49.491 00:29:49.491 ' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:49.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.491 --rc genhtml_branch_coverage=1 00:29:49.491 --rc genhtml_function_coverage=1 00:29:49.491 --rc genhtml_legend=1 00:29:49.491 --rc geninfo_all_blocks=1 00:29:49.491 --rc geninfo_unexecuted_blocks=1 00:29:49.491 00:29:49.491 ' 00:29:49.491 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:49.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.491 --rc genhtml_branch_coverage=1 00:29:49.491 --rc genhtml_function_coverage=1 00:29:49.491 --rc genhtml_legend=1 00:29:49.492 --rc geninfo_all_blocks=1 00:29:49.492 --rc geninfo_unexecuted_blocks=1 00:29:49.492 00:29:49.492 ' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.492 
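The dma.sh run above does almost nothing for TCP (host/dma.sh exits 0 unless the transport is rdma), but its preamble shows how these scripts gate lcov post-processing: scripts/common.sh splits two version strings on '.', '-' and ':' and compares them component by component. A simplified stand-in for that comparison, assuming purely numeric components (the real cmp_versions handles more operators):

  version_lt() {                        # usage: version_lt 1.15 2
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          local a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0       # the first differing component decides
          (( a > b )) && return 1
      done
      return 1                          # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"      # mirrors the lt 1.15 2 call traced above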
01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:49.492 00:29:49.492 real 0m0.158s 00:29:49.492 user 0m0.103s 00:29:49.492 sys 0m0.063s 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:49.492 ************************************ 00:29:49.492 END TEST dma 00:29:49.492 ************************************ 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.492 ************************************ 00:29:49.492 START TEST nvmf_identify 00:29:49.492 
************************************ 00:29:49.492 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:49.751 * Looking for test storage... 00:29:49.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:49.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.751 --rc genhtml_branch_coverage=1 00:29:49.751 --rc genhtml_function_coverage=1 00:29:49.751 --rc genhtml_legend=1 00:29:49.751 --rc geninfo_all_blocks=1 00:29:49.751 --rc geninfo_unexecuted_blocks=1 00:29:49.751 00:29:49.751 ' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:49.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.751 --rc genhtml_branch_coverage=1 00:29:49.751 --rc genhtml_function_coverage=1 00:29:49.751 --rc genhtml_legend=1 00:29:49.751 --rc geninfo_all_blocks=1 00:29:49.751 --rc geninfo_unexecuted_blocks=1 00:29:49.751 00:29:49.751 ' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:49.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.751 --rc genhtml_branch_coverage=1 00:29:49.751 --rc genhtml_function_coverage=1 00:29:49.751 --rc genhtml_legend=1 00:29:49.751 --rc geninfo_all_blocks=1 00:29:49.751 --rc geninfo_unexecuted_blocks=1 00:29:49.751 00:29:49.751 ' 00:29:49.751 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:49.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.751 --rc genhtml_branch_coverage=1 00:29:49.751 --rc genhtml_function_coverage=1 00:29:49.751 --rc genhtml_legend=1 00:29:49.751 --rc geninfo_all_blocks=1 00:29:49.752 --rc geninfo_unexecuted_blocks=1 00:29:49.752 00:29:49.752 ' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.752 01:47:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:51.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:51.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.655 
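gather_supported_nvmf_pci_devs, traced above and just below, whitelists NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts) and then maps every matching PCI function to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 are found under 0000:0a:00.0 and 0000:0a:00.1. The sysfs lookup is the reusable piece; a hedged sketch:

  # resolve a PCI function to its net devices, as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) does
  pci_to_netdevs() {                    # usage: pci_to_netdevs 0000:0a:00.0
      local pci=$1 d
      for d in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$d" ] && echo "${d##*/}"
      done
  }
  pci_to_netdevs 0000:0a:00.0           # prints cvl_0_0 on this test bed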
01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:51.655 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:51.655 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.655 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.656 01:47:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.656 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:29:51.915 00:29:51.915 --- 10.0.0.2 ping statistics --- 00:29:51.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.915 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:51.915 00:29:51.915 --- 10.0.0.1 ping statistics --- 00:29:51.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.915 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=999795 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 999795 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 999795 ']' 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.915 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.915 [2024-10-01 01:47:31.630306] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
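The nvmf_tcp_init sequence traced above builds the TCP test topology: of the two net devices found under the NIC's PCI functions, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side port at 10.0.0.2, cvl_0_1 stays in the host namespace as the initiator port at 10.0.0.1, port 4420 is opened in iptables, reachability is checked in both directions, the nvme-tcp host driver is loaded, and nvmf_tgt is started inside the namespace. A minimal by-hand equivalent is sketched below; the device names, addresses, namespace name and the $SPDK_DIR placeholder reflect this particular run rather than fixed constants.

    # Target port goes into its own namespace; initiator port stays in the host namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic reach port 4420 on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Check reachability both ways, load the host driver, then start the target
    # inside the namespace with the same core mask and trace flags as this run.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF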
00:29:51.915 [2024-10-01 01:47:31.630413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.915 [2024-10-01 01:47:31.703465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.174 [2024-10-01 01:47:31.796621] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.174 [2024-10-01 01:47:31.796683] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.174 [2024-10-01 01:47:31.796709] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.174 [2024-10-01 01:47:31.796722] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.174 [2024-10-01 01:47:31.796733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.174 [2024-10-01 01:47:31.796815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.174 [2024-10-01 01:47:31.796886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.174 [2024-10-01 01:47:31.796987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.174 [2024-10-01 01:47:31.796989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 [2024-10-01 01:47:31.928332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 Malloc0 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 [2024-10-01 01:47:31.999602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.174 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.174 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.174 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.174 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.174 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.175 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:52.175 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.175 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.175 [ 00:29:52.175 { 00:29:52.175 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:52.175 "subtype": "Discovery", 00:29:52.175 "listen_addresses": [ 00:29:52.175 { 00:29:52.175 "trtype": "TCP", 00:29:52.175 "adrfam": "IPv4", 00:29:52.175 "traddr": "10.0.0.2", 00:29:52.175 "trsvcid": "4420" 00:29:52.175 } 00:29:52.175 ], 00:29:52.175 "allow_any_host": true, 00:29:52.175 "hosts": [] 00:29:52.175 }, 00:29:52.175 { 00:29:52.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.175 "subtype": "NVMe", 00:29:52.175 "listen_addresses": [ 00:29:52.175 { 00:29:52.175 "trtype": "TCP", 00:29:52.175 "adrfam": "IPv4", 00:29:52.175 "traddr": "10.0.0.2", 00:29:52.175 "trsvcid": "4420" 00:29:52.175 } 00:29:52.175 ], 00:29:52.175 "allow_any_host": true, 00:29:52.175 "hosts": [], 00:29:52.175 "serial_number": "SPDK00000000000001", 00:29:52.175 "model_number": "SPDK bdev Controller", 00:29:52.175 "max_namespaces": 32, 00:29:52.175 "min_cntlid": 1, 00:29:52.175 "max_cntlid": 65519, 00:29:52.175 "namespaces": [ 00:29:52.175 { 00:29:52.175 "nsid": 1, 00:29:52.175 "bdev_name": "Malloc0", 00:29:52.175 "name": "Malloc0", 00:29:52.175 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:52.175 "eui64": "ABCDEF0123456789", 00:29:52.175 "uuid": "da034375-f8f1-44ad-8a9e-1ec9df285957" 00:29:52.175 } 00:29:52.175 ] 00:29:52.175 } 00:29:52.175 ] 00:29:52.175 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.175 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:52.436 [2024-10-01 01:47:32.039841] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:52.436 [2024-10-01 01:47:32.039882] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999917 ] 00:29:52.436 [2024-10-01 01:47:32.074245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:52.436 [2024-10-01 01:47:32.074332] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:52.436 [2024-10-01 01:47:32.074343] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:52.436 [2024-10-01 01:47:32.074358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:52.436 [2024-10-01 01:47:32.074373] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:52.436 [2024-10-01 01:47:32.075097] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:52.436 [2024-10-01 01:47:32.075149] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xea8210 0 00:29:52.436 [2024-10-01 01:47:32.081022] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:52.436 [2024-10-01 01:47:32.081044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:52.436 [2024-10-01 01:47:32.081053] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:52.436 [2024-10-01 01:47:32.081059] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:52.436 [2024-10-01 01:47:32.081093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.081106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.081117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.081134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:52.436 [2024-10-01 01:47:32.081162] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 [2024-10-01 01:47:32.089014] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.089032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.436 [2024-10-01 01:47:32.089040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.089066] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:52.436 [2024-10-01 01:47:32.089077] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:52.436 [2024-10-01 01:47:32.089086] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:52.436 [2024-10-01 01:47:32.089106] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089115] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.089132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.089183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 [2024-10-01 01:47:32.089339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.089354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.436 [2024-10-01 01:47:32.089361] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.089377] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:52.436 [2024-10-01 01:47:32.089391] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:52.436 [2024-10-01 01:47:32.089403] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089417] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.089428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.089449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 [2024-10-01 01:47:32.089547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.089560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.436 [2024-10-01 01:47:32.089566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089573] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.089582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:52.436 [2024-10-01 01:47:32.089595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.089608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.089637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.089658] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 
[2024-10-01 01:47:32.089764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.089779] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.436 [2024-10-01 01:47:32.089786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.089802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.089818] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089827] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089834] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.089844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.089865] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 [2024-10-01 01:47:32.089966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.089981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.436 [2024-10-01 01:47:32.089988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.089994] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.090015] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:52.436 [2024-10-01 01:47:32.090025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.090038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.090157] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:52.436 [2024-10-01 01:47:32.090166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.090179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.090187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.090193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.090203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.090225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.436 [2024-10-01 01:47:32.090366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.436 [2024-10-01 01:47:32.090378] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:52.436 [2024-10-01 01:47:32.090385] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.090391] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.436 [2024-10-01 01:47:32.090400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:52.436 [2024-10-01 01:47:32.090416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.090434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.436 [2024-10-01 01:47:32.090441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.436 [2024-10-01 01:47:32.090452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.436 [2024-10-01 01:47:32.090473] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.437 [2024-10-01 01:47:32.090580] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.437 [2024-10-01 01:47:32.090595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.437 [2024-10-01 01:47:32.090602] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.090608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.437 [2024-10-01 01:47:32.090616] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:52.437 [2024-10-01 01:47:32.090624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.090637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:52.437 [2024-10-01 01:47:32.090652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.090667] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.090674] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.090685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.437 [2024-10-01 01:47:32.090707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.437 [2024-10-01 01:47:32.090859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.437 [2024-10-01 01:47:32.090874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.437 [2024-10-01 01:47:32.090881] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.090888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8210): datao=0, datal=4096, cccid=0 00:29:52.437 [2024-10-01 01:47:32.090896] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf12440) on tqpair(0xea8210): expected_datao=0, payload_size=4096 
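Stepping back from the admin-queue DEBUG trace for a moment: the subsystem this identify run is querying was provisioned by the rpc_cmd calls traced earlier (host/identify.sh@24 through @37). In the test harness rpc_cmd forwards to SPDK's scripts/rpc.py, so the same provisioning done by hand against an already-running nvmf_tgt would look roughly as follows; $SPDK_DIR is a placeholder for the SPDK tree, and the default /var/tmp/spdk.sock RPC socket is assumed.

    # Transport, backing bdev, subsystem, namespace and listeners, using the same
    # arguments the harness passed above.
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR"/scripts/rpc.py nvmf_get_subsystems    # should list both discovery and cnode1

The spdk_nvme_identify invocation at host/identify.sh@39, whose trace continues below, then connects to the discovery subsystem at 10.0.0.2:4420 and performs the usual fabric bring-up visible in the surrounding DEBUG lines: ICReq/ICResp, FABRIC CONNECT, property reads of VS and CAP, CC.EN toggled to 0 and back to 1 while CSTS.RDY is polled, followed by IDENTIFY and the discovery GET LOG PAGE whose decoded output is printed further down.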
00:29:52.437 [2024-10-01 01:47:32.090904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.090922] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.090932] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131175] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.437 [2024-10-01 01:47:32.131194] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.437 [2024-10-01 01:47:32.131202] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131210] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.437 [2024-10-01 01:47:32.131222] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:52.437 [2024-10-01 01:47:32.131232] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:52.437 [2024-10-01 01:47:32.131239] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:52.437 [2024-10-01 01:47:32.131247] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:52.437 [2024-10-01 01:47:32.131255] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:52.437 [2024-10-01 01:47:32.131269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.131284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.131296] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:52.437 [2024-10-01 01:47:32.131347] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.437 [2024-10-01 01:47:32.131456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.437 [2024-10-01 01:47:32.131468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.437 [2024-10-01 01:47:32.131475] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.437 [2024-10-01 01:47:32.131494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131501] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.437 [2024-10-01 01:47:32.131528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.437 [2024-10-01 01:47:32.131559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.437 [2024-10-01 01:47:32.131591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.437 [2024-10-01 01:47:32.131639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.131659] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:52.437 [2024-10-01 01:47:32.131672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.131704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.437 [2024-10-01 01:47:32.131731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12440, cid 0, qid 0 00:29:52.437 [2024-10-01 01:47:32.131742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf125c0, cid 1, qid 0 00:29:52.437 [2024-10-01 01:47:32.131750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12740, cid 2, qid 0 00:29:52.437 [2024-10-01 01:47:32.131772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.437 [2024-10-01 01:47:32.131780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12a40, cid 4, qid 0 00:29:52.437 [2024-10-01 01:47:32.131963] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.437 [2024-10-01 01:47:32.131978] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.437 [2024-10-01 01:47:32.131985] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.131992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12a40) on 
tqpair=0xea8210 00:29:52.437 [2024-10-01 01:47:32.132009] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:52.437 [2024-10-01 01:47:32.132019] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:52.437 [2024-10-01 01:47:32.132036] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.132057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.437 [2024-10-01 01:47:32.132094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12a40, cid 4, qid 0 00:29:52.437 [2024-10-01 01:47:32.132277] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.437 [2024-10-01 01:47:32.132292] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.437 [2024-10-01 01:47:32.132299] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132306] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8210): datao=0, datal=4096, cccid=4 00:29:52.437 [2024-10-01 01:47:32.132314] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf12a40) on tqpair(0xea8210): expected_datao=0, payload_size=4096 00:29:52.437 [2024-10-01 01:47:32.132321] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132338] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132347] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132413] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.437 [2024-10-01 01:47:32.132427] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.437 [2024-10-01 01:47:32.132434] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12a40) on tqpair=0xea8210 00:29:52.437 [2024-10-01 01:47:32.132459] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:52.437 [2024-10-01 01:47:32.132501] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132512] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.132523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.437 [2024-10-01 01:47:32.132535] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132542] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.437 [2024-10-01 01:47:32.132548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xea8210) 00:29:52.437 [2024-10-01 01:47:32.132558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.438 [2024-10-01 01:47:32.132584] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12a40, cid 4, qid 0 00:29:52.438 [2024-10-01 01:47:32.132595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12bc0, cid 5, qid 0 00:29:52.438 [2024-10-01 01:47:32.132754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.438 [2024-10-01 01:47:32.132766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.438 [2024-10-01 01:47:32.132773] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.132780] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8210): datao=0, datal=1024, cccid=4 00:29:52.438 [2024-10-01 01:47:32.132788] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf12a40) on tqpair(0xea8210): expected_datao=0, payload_size=1024 00:29:52.438 [2024-10-01 01:47:32.132795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.132805] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.132812] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.132821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.438 [2024-10-01 01:47:32.132830] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.438 [2024-10-01 01:47:32.132836] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.132843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12bc0) on tqpair=0xea8210 00:29:52.438 [2024-10-01 01:47:32.176014] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.438 [2024-10-01 01:47:32.176032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.438 [2024-10-01 01:47:32.176040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.176047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12a40) on tqpair=0xea8210 00:29:52.438 [2024-10-01 01:47:32.176063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.176072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8210) 00:29:52.438 [2024-10-01 01:47:32.176083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.438 [2024-10-01 01:47:32.176113] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12a40, cid 4, qid 0 00:29:52.438 [2024-10-01 01:47:32.176286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.438 [2024-10-01 01:47:32.176302] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.438 [2024-10-01 01:47:32.176309] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.176315] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8210): datao=0, datal=3072, cccid=4 00:29:52.438 [2024-10-01 01:47:32.176323] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf12a40) on tqpair(0xea8210): expected_datao=0, payload_size=3072 00:29:52.438 [2024-10-01 01:47:32.176331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.176351] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.438 
[2024-10-01 01:47:32.176360] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218151] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.438 [2024-10-01 01:47:32.218171] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.438 [2024-10-01 01:47:32.218178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12a40) on tqpair=0xea8210 00:29:52.438 [2024-10-01 01:47:32.218201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218210] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xea8210) 00:29:52.438 [2024-10-01 01:47:32.218221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.438 [2024-10-01 01:47:32.218256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf12a40, cid 4, qid 0 00:29:52.438 [2024-10-01 01:47:32.218392] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.438 [2024-10-01 01:47:32.218405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.438 [2024-10-01 01:47:32.218412] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xea8210): datao=0, datal=8, cccid=4 00:29:52.438 [2024-10-01 01:47:32.218426] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf12a40) on tqpair(0xea8210): expected_datao=0, payload_size=8 00:29:52.438 [2024-10-01 01:47:32.218433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218443] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.218450] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.264012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.438 [2024-10-01 01:47:32.264030] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.438 [2024-10-01 01:47:32.264037] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.438 [2024-10-01 01:47:32.264059] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12a40) on tqpair=0xea8210 00:29:52.438 ===================================================== 00:29:52.438 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:52.438 ===================================================== 00:29:52.438 Controller Capabilities/Features 00:29:52.438 ================================ 00:29:52.438 Vendor ID: 0000 00:29:52.438 Subsystem Vendor ID: 0000 00:29:52.438 Serial Number: .................... 00:29:52.438 Model Number: ........................................ 
00:29:52.438 Firmware Version: 25.01 00:29:52.438 Recommended Arb Burst: 0 00:29:52.438 IEEE OUI Identifier: 00 00 00 00:29:52.438 Multi-path I/O 00:29:52.438 May have multiple subsystem ports: No 00:29:52.438 May have multiple controllers: No 00:29:52.438 Associated with SR-IOV VF: No 00:29:52.438 Max Data Transfer Size: 131072 00:29:52.438 Max Number of Namespaces: 0 00:29:52.438 Max Number of I/O Queues: 1024 00:29:52.438 NVMe Specification Version (VS): 1.3 00:29:52.438 NVMe Specification Version (Identify): 1.3 00:29:52.438 Maximum Queue Entries: 128 00:29:52.438 Contiguous Queues Required: Yes 00:29:52.438 Arbitration Mechanisms Supported 00:29:52.438 Weighted Round Robin: Not Supported 00:29:52.438 Vendor Specific: Not Supported 00:29:52.438 Reset Timeout: 15000 ms 00:29:52.438 Doorbell Stride: 4 bytes 00:29:52.438 NVM Subsystem Reset: Not Supported 00:29:52.438 Command Sets Supported 00:29:52.438 NVM Command Set: Supported 00:29:52.438 Boot Partition: Not Supported 00:29:52.438 Memory Page Size Minimum: 4096 bytes 00:29:52.438 Memory Page Size Maximum: 4096 bytes 00:29:52.438 Persistent Memory Region: Not Supported 00:29:52.438 Optional Asynchronous Events Supported 00:29:52.438 Namespace Attribute Notices: Not Supported 00:29:52.438 Firmware Activation Notices: Not Supported 00:29:52.438 ANA Change Notices: Not Supported 00:29:52.438 PLE Aggregate Log Change Notices: Not Supported 00:29:52.438 LBA Status Info Alert Notices: Not Supported 00:29:52.438 EGE Aggregate Log Change Notices: Not Supported 00:29:52.438 Normal NVM Subsystem Shutdown event: Not Supported 00:29:52.438 Zone Descriptor Change Notices: Not Supported 00:29:52.438 Discovery Log Change Notices: Supported 00:29:52.438 Controller Attributes 00:29:52.438 128-bit Host Identifier: Not Supported 00:29:52.438 Non-Operational Permissive Mode: Not Supported 00:29:52.438 NVM Sets: Not Supported 00:29:52.438 Read Recovery Levels: Not Supported 00:29:52.438 Endurance Groups: Not Supported 00:29:52.438 Predictable Latency Mode: Not Supported 00:29:52.438 Traffic Based Keep ALive: Not Supported 00:29:52.438 Namespace Granularity: Not Supported 00:29:52.438 SQ Associations: Not Supported 00:29:52.438 UUID List: Not Supported 00:29:52.438 Multi-Domain Subsystem: Not Supported 00:29:52.438 Fixed Capacity Management: Not Supported 00:29:52.438 Variable Capacity Management: Not Supported 00:29:52.438 Delete Endurance Group: Not Supported 00:29:52.438 Delete NVM Set: Not Supported 00:29:52.438 Extended LBA Formats Supported: Not Supported 00:29:52.438 Flexible Data Placement Supported: Not Supported 00:29:52.438 00:29:52.438 Controller Memory Buffer Support 00:29:52.438 ================================ 00:29:52.438 Supported: No 00:29:52.438 00:29:52.438 Persistent Memory Region Support 00:29:52.438 ================================ 00:29:52.438 Supported: No 00:29:52.438 00:29:52.438 Admin Command Set Attributes 00:29:52.438 ============================ 00:29:52.438 Security Send/Receive: Not Supported 00:29:52.438 Format NVM: Not Supported 00:29:52.438 Firmware Activate/Download: Not Supported 00:29:52.438 Namespace Management: Not Supported 00:29:52.438 Device Self-Test: Not Supported 00:29:52.438 Directives: Not Supported 00:29:52.438 NVMe-MI: Not Supported 00:29:52.438 Virtualization Management: Not Supported 00:29:52.438 Doorbell Buffer Config: Not Supported 00:29:52.438 Get LBA Status Capability: Not Supported 00:29:52.438 Command & Feature Lockdown Capability: Not Supported 00:29:52.438 Abort Command Limit: 1 00:29:52.438 Async 
Event Request Limit: 4 00:29:52.438 Number of Firmware Slots: N/A 00:29:52.438 Firmware Slot 1 Read-Only: N/A 00:29:52.438 Firmware Activation Without Reset: N/A 00:29:52.438 Multiple Update Detection Support: N/A 00:29:52.438 Firmware Update Granularity: No Information Provided 00:29:52.438 Per-Namespace SMART Log: No 00:29:52.438 Asymmetric Namespace Access Log Page: Not Supported 00:29:52.438 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:52.438 Command Effects Log Page: Not Supported 00:29:52.438 Get Log Page Extended Data: Supported 00:29:52.438 Telemetry Log Pages: Not Supported 00:29:52.438 Persistent Event Log Pages: Not Supported 00:29:52.439 Supported Log Pages Log Page: May Support 00:29:52.439 Commands Supported & Effects Log Page: Not Supported 00:29:52.439 Feature Identifiers & Effects Log Page:May Support 00:29:52.439 NVMe-MI Commands & Effects Log Page: May Support 00:29:52.439 Data Area 4 for Telemetry Log: Not Supported 00:29:52.439 Error Log Page Entries Supported: 128 00:29:52.439 Keep Alive: Not Supported 00:29:52.439 00:29:52.439 NVM Command Set Attributes 00:29:52.439 ========================== 00:29:52.439 Submission Queue Entry Size 00:29:52.439 Max: 1 00:29:52.439 Min: 1 00:29:52.439 Completion Queue Entry Size 00:29:52.439 Max: 1 00:29:52.439 Min: 1 00:29:52.439 Number of Namespaces: 0 00:29:52.439 Compare Command: Not Supported 00:29:52.439 Write Uncorrectable Command: Not Supported 00:29:52.439 Dataset Management Command: Not Supported 00:29:52.439 Write Zeroes Command: Not Supported 00:29:52.439 Set Features Save Field: Not Supported 00:29:52.439 Reservations: Not Supported 00:29:52.439 Timestamp: Not Supported 00:29:52.439 Copy: Not Supported 00:29:52.439 Volatile Write Cache: Not Present 00:29:52.439 Atomic Write Unit (Normal): 1 00:29:52.439 Atomic Write Unit (PFail): 1 00:29:52.439 Atomic Compare & Write Unit: 1 00:29:52.439 Fused Compare & Write: Supported 00:29:52.439 Scatter-Gather List 00:29:52.439 SGL Command Set: Supported 00:29:52.439 SGL Keyed: Supported 00:29:52.439 SGL Bit Bucket Descriptor: Not Supported 00:29:52.439 SGL Metadata Pointer: Not Supported 00:29:52.439 Oversized SGL: Not Supported 00:29:52.439 SGL Metadata Address: Not Supported 00:29:52.439 SGL Offset: Supported 00:29:52.439 Transport SGL Data Block: Not Supported 00:29:52.439 Replay Protected Memory Block: Not Supported 00:29:52.439 00:29:52.439 Firmware Slot Information 00:29:52.439 ========================= 00:29:52.439 Active slot: 0 00:29:52.439 00:29:52.439 00:29:52.439 Error Log 00:29:52.439 ========= 00:29:52.439 00:29:52.439 Active Namespaces 00:29:52.439 ================= 00:29:52.439 Discovery Log Page 00:29:52.439 ================== 00:29:52.439 Generation Counter: 2 00:29:52.439 Number of Records: 2 00:29:52.439 Record Format: 0 00:29:52.439 00:29:52.439 Discovery Log Entry 0 00:29:52.439 ---------------------- 00:29:52.439 Transport Type: 3 (TCP) 00:29:52.439 Address Family: 1 (IPv4) 00:29:52.439 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:52.439 Entry Flags: 00:29:52.439 Duplicate Returned Information: 1 00:29:52.439 Explicit Persistent Connection Support for Discovery: 1 00:29:52.439 Transport Requirements: 00:29:52.439 Secure Channel: Not Required 00:29:52.439 Port ID: 0 (0x0000) 00:29:52.439 Controller ID: 65535 (0xffff) 00:29:52.439 Admin Max SQ Size: 128 00:29:52.439 Transport Service Identifier: 4420 00:29:52.439 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:52.439 Transport Address: 10.0.0.2 00:29:52.439 
Discovery Log Entry 1 00:29:52.439 ---------------------- 00:29:52.439 Transport Type: 3 (TCP) 00:29:52.439 Address Family: 1 (IPv4) 00:29:52.439 Subsystem Type: 2 (NVM Subsystem) 00:29:52.439 Entry Flags: 00:29:52.439 Duplicate Returned Information: 0 00:29:52.439 Explicit Persistent Connection Support for Discovery: 0 00:29:52.439 Transport Requirements: 00:29:52.439 Secure Channel: Not Required 00:29:52.439 Port ID: 0 (0x0000) 00:29:52.439 Controller ID: 65535 (0xffff) 00:29:52.439 Admin Max SQ Size: 128 00:29:52.439 Transport Service Identifier: 4420 00:29:52.439 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:52.439 Transport Address: 10.0.0.2 [2024-10-01 01:47:32.264171] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:52.439 [2024-10-01 01:47:32.264193] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12440) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.439 [2024-10-01 01:47:32.264215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf125c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.439 [2024-10-01 01:47:32.264230] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf12740) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.439 [2024-10-01 01:47:32.264246] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.439 [2024-10-01 01:47:32.264267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264275] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.439 [2024-10-01 01:47:32.264293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.439 [2024-10-01 01:47:32.264317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.439 [2024-10-01 01:47:32.264421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.439 [2024-10-01 01:47:32.264436] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.439 [2024-10-01 01:47:32.264444] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.439 [2024-10-01 01:47:32.264487] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.439 [2024-10-01 01:47:32.264519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.439 [2024-10-01 01:47:32.264643] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.439 [2024-10-01 01:47:32.264655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.439 [2024-10-01 01:47:32.264661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264677] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:52.439 [2024-10-01 01:47:32.264690] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:52.439 [2024-10-01 01:47:32.264707] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.439 [2024-10-01 01:47:32.264733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.439 [2024-10-01 01:47:32.264754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.439 [2024-10-01 01:47:32.264903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.439 [2024-10-01 01:47:32.264915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.439 [2024-10-01 01:47:32.264922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.264944] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.264960] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.439 [2024-10-01 01:47:32.264970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.439 [2024-10-01 01:47:32.264991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.439 [2024-10-01 01:47:32.265101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.439 [2024-10-01 01:47:32.265116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.439 [2024-10-01 01:47:32.265123] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.265130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.439 [2024-10-01 01:47:32.265146] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.265155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.265162] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.439 [2024-10-01 01:47:32.265173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.439 [2024-10-01 01:47:32.265194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.439 [2024-10-01 01:47:32.265306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.439 [2024-10-01 01:47:32.265319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.439 [2024-10-01 01:47:32.265325] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.439 [2024-10-01 01:47:32.265332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.265348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265357] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.265378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.265399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.265547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.265559] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.265566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265573] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.265588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265597] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.265614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.265635] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.265752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.265767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.265774] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265781] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.265797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265812] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.265823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.265844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.265948] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.265963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.265970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.265976] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.265993] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266014] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.266032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.266054] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.266204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.266219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.266226] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.266249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.266282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.266304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.266408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.266423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.266430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.266453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.266479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.266500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.266607] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.266618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.266625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.266648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.266674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.266694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.266798] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.266813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.266820] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266826] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.266842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.266858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.266868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.266889] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.267020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.267036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.267043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.267066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.267092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.267117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.267217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.440 [2024-10-01 01:47:32.267229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.440 [2024-10-01 01:47:32.267236] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.440 [2024-10-01 01:47:32.267259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.440 [2024-10-01 01:47:32.267274] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.440 [2024-10-01 01:47:32.267285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.440 [2024-10-01 01:47:32.267305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.440 [2024-10-01 01:47:32.267457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.441 [2024-10-01 01:47:32.267469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.441 [2024-10-01 01:47:32.267477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.441 [2024-10-01 01:47:32.267499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267508] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267514] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.441 [2024-10-01 01:47:32.267525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.441 [2024-10-01 01:47:32.267545] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.441 [2024-10-01 01:47:32.267645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.441 [2024-10-01 01:47:32.267657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.441 [2024-10-01 01:47:32.267664] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267670] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.441 [2024-10-01 01:47:32.267686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.441 [2024-10-01 01:47:32.267712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.441 [2024-10-01 01:47:32.267732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.441 [2024-10-01 01:47:32.267848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.441 [2024-10-01 01:47:32.267860] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.441 [2024-10-01 01:47:32.267867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.441 
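The long run of FABRIC PROPERTY GET completions here is the host polling controller properties while it shuts down and detaches from the discovery controller that returned Discovery Log Entry 1; the shutdown completes a few entries further down. If a target like the one in this run were still listening, the same discovery exchange could be repeated from any host with nvme-cli installed (an illustrative cross-check, not a command taken from this test script):
+ nvme discover -t tcp -a 10.0.0.2 -s 4420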
[2024-10-01 01:47:32.267889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.267904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.441 [2024-10-01 01:47:32.267915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.441 [2024-10-01 01:47:32.267936] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.441 [2024-10-01 01:47:32.272023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.441 [2024-10-01 01:47:32.272040] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.441 [2024-10-01 01:47:32.272047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.272054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.441 [2024-10-01 01:47:32.272071] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.272079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.272086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xea8210) 00:29:52.441 [2024-10-01 01:47:32.272096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.441 [2024-10-01 01:47:32.272118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf128c0, cid 3, qid 0 00:29:52.441 [2024-10-01 01:47:32.272274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.441 [2024-10-01 01:47:32.272289] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.441 [2024-10-01 01:47:32.272296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.441 [2024-10-01 01:47:32.272303] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf128c0) on tqpair=0xea8210 00:29:52.441 [2024-10-01 01:47:32.272316] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:52.441 00:29:52.702 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:52.702 [2024-10-01 01:47:32.305520] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
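The spdk_nvme_identify invocation above targets the NVM subsystem advertised by that discovery entry (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 over TCP). A minimal way to rerun the same query by hand from an SPDK build tree is sketched below; the relative path is an assumption, only the transport string and the -L all flag are taken from the log:
+ ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
The -L all option enables every SPDK debug log flag, which is what produces the nvme_tcp/nvme_ctrlr *DEBUG* trace that follows.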
00:29:52.702 [2024-10-01 01:47:32.305562] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999920 ] 00:29:52.702 [2024-10-01 01:47:32.337750] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:52.702 [2024-10-01 01:47:32.337802] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:52.702 [2024-10-01 01:47:32.337812] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:52.702 [2024-10-01 01:47:32.337826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:52.702 [2024-10-01 01:47:32.337839] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:52.702 [2024-10-01 01:47:32.338298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:52.702 [2024-10-01 01:47:32.338356] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x74b210 0 00:29:52.702 [2024-10-01 01:47:32.349027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:52.702 [2024-10-01 01:47:32.349047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:52.702 [2024-10-01 01:47:32.349055] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:52.702 [2024-10-01 01:47:32.349061] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:52.702 [2024-10-01 01:47:32.349089] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.702 [2024-10-01 01:47:32.349115] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.702 [2024-10-01 01:47:32.349122] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.702 [2024-10-01 01:47:32.349140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:52.702 [2024-10-01 01:47:32.349169] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.702 [2024-10-01 01:47:32.357010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.702 [2024-10-01 01:47:32.357028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.702 [2024-10-01 01:47:32.357035] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.702 [2024-10-01 01:47:32.357058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.702 [2024-10-01 01:47:32.357072] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:52.702 [2024-10-01 01:47:32.357083] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:52.702 [2024-10-01 01:47:32.357092] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:52.702 [2024-10-01 01:47:32.357111] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.702 [2024-10-01 01:47:32.357120] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.702 [2024-10-01 01:47:32.357126] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.702 [2024-10-01 01:47:32.357138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.702 [2024-10-01 01:47:32.357163] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.702 [2024-10-01 01:47:32.357307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.702 [2024-10-01 01:47:32.357323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.702 [2024-10-01 01:47:32.357330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.357345] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:52.703 [2024-10-01 01:47:32.357358] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:52.703 [2024-10-01 01:47:32.357371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357379] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.357396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.357418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.357521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.357536] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.357543] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.357558] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:52.703 [2024-10-01 01:47:32.357572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.357584] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.357609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.357635] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.357736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.357749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.357756] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357763] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.357771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.357787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.357814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.357835] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.357934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.357949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.357956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.357963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.357970] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:52.703 [2024-10-01 01:47:32.357979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.357993] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.358114] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:52.703 [2024-10-01 01:47:32.358123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.358135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.358159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.358182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.358311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.358323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.358330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.358345] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:52.703 [2024-10-01 01:47:32.358362] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358377] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.358387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.358413] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.358507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.358520] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.358527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358533] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.358541] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:52.703 [2024-10-01 01:47:32.358549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:52.703 [2024-10-01 01:47:32.358562] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:52.703 [2024-10-01 01:47:32.358576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:52.703 [2024-10-01 01:47:32.358590] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.358608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.703 [2024-10-01 01:47:32.358630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.358770] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.703 [2024-10-01 01:47:32.358782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.703 [2024-10-01 01:47:32.358789] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358796] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=4096, cccid=0 00:29:52.703 [2024-10-01 01:47:32.358804] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5440) on tqpair(0x74b210): expected_datao=0, payload_size=4096 00:29:52.703 [2024-10-01 01:47:32.358811] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358821] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358829] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 
01:47:32.358840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.358850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.358856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358863] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.358873] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:52.703 [2024-10-01 01:47:32.358881] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:52.703 [2024-10-01 01:47:32.358889] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:52.703 [2024-10-01 01:47:32.358896] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:52.703 [2024-10-01 01:47:32.358903] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:52.703 [2024-10-01 01:47:32.358911] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:52.703 [2024-10-01 01:47:32.358925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:52.703 [2024-10-01 01:47:32.358940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.358955] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.358966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:52.703 [2024-10-01 01:47:32.358988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.703 [2024-10-01 01:47:32.359100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.703 [2024-10-01 01:47:32.359116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.703 [2024-10-01 01:47:32.359123] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.359130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.703 [2024-10-01 01:47:32.359140] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.359147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.703 [2024-10-01 01:47:32.359153] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74b210) 00:29:52.703 [2024-10-01 01:47:32.359164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.704 [2024-10-01 01:47:32.359174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359181] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x74b210) 00:29:52.704 
[2024-10-01 01:47:32.359196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.704 [2024-10-01 01:47:32.359205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359218] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.359227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.704 [2024-10-01 01:47:32.359237] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.359258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.704 [2024-10-01 01:47:32.359267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359314] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.359331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.704 [2024-10-01 01:47:32.359354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5440, cid 0, qid 0 00:29:52.704 [2024-10-01 01:47:32.359381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b55c0, cid 1, qid 0 00:29:52.704 [2024-10-01 01:47:32.359389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5740, cid 2, qid 0 00:29:52.704 [2024-10-01 01:47:32.359397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.704 [2024-10-01 01:47:32.359408] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.704 [2024-10-01 01:47:32.359559] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.704 [2024-10-01 01:47:32.359572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.704 [2024-10-01 01:47:32.359579] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.704 [2024-10-01 01:47:32.359593] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:52.704 [2024-10-01 01:47:32.359602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359616] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359642] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359656] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.359667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:52.704 [2024-10-01 01:47:32.359703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.704 [2024-10-01 01:47:32.359875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.704 [2024-10-01 01:47:32.359888] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.704 [2024-10-01 01:47:32.359895] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.359902] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.704 [2024-10-01 01:47:32.359972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.359992] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.360016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360024] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.360035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.704 [2024-10-01 01:47:32.360058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.704 [2024-10-01 01:47:32.360207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.704 [2024-10-01 01:47:32.360222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.704 [2024-10-01 01:47:32.360229] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=4096, cccid=4 00:29:52.704 [2024-10-01 01:47:32.360243] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5a40) on tqpair(0x74b210): expected_datao=0, payload_size=4096 00:29:52.704 [2024-10-01 01:47:32.360250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360268] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360277] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360322] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.704 [2024-10-01 01:47:32.360334] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:52.704 [2024-10-01 01:47:32.360345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.704 [2024-10-01 01:47:32.360367] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:52.704 [2024-10-01 01:47:32.360387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.360404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.360418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360426] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.360436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.704 [2024-10-01 01:47:32.360459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.704 [2024-10-01 01:47:32.360595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.704 [2024-10-01 01:47:32.360611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.704 [2024-10-01 01:47:32.360618] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360624] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=4096, cccid=4 00:29:52.704 [2024-10-01 01:47:32.360632] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5a40) on tqpair(0x74b210): expected_datao=0, payload_size=4096 00:29:52.704 [2024-10-01 01:47:32.360640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360649] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360657] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.704 [2024-10-01 01:47:32.360678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.704 [2024-10-01 01:47:32.360685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.704 [2024-10-01 01:47:32.360711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.360729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.360743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.704 [2024-10-01 01:47:32.360761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.704 [2024-10-01 01:47:32.360784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.704 [2024-10-01 01:47:32.360892] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.704 [2024-10-01 01:47:32.360907] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.704 [2024-10-01 01:47:32.360914] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360920] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=4096, cccid=4 00:29:52.704 [2024-10-01 01:47:32.360927] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5a40) on tqpair(0x74b210): expected_datao=0, payload_size=4096 00:29:52.704 [2024-10-01 01:47:32.360935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360956] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.360966] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.365024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.704 [2024-10-01 01:47:32.365041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.704 [2024-10-01 01:47:32.365048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.704 [2024-10-01 01:47:32.365054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.704 [2024-10-01 01:47:32.365067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.365082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:52.704 [2024-10-01 01:47:32.365112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:52.705 [2024-10-01 01:47:32.365124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:52.705 [2024-10-01 01:47:32.365133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:52.705 [2024-10-01 01:47:32.365141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:52.705 [2024-10-01 01:47:32.365149] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:52.705 [2024-10-01 01:47:32.365157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:52.705 [2024-10-01 01:47:32.365165] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:52.705 [2024-10-01 01:47:32.365184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.365203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.365215] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365222] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.365237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.705 [2024-10-01 01:47:32.365260] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.705 [2024-10-01 01:47:32.365272] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5bc0, cid 5, qid 0 00:29:52.705 [2024-10-01 01:47:32.365419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.705 [2024-10-01 01:47:32.365431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.705 [2024-10-01 01:47:32.365438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.705 [2024-10-01 01:47:32.365454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.705 [2024-10-01 01:47:32.365463] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.705 [2024-10-01 01:47:32.365470] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5bc0) on tqpair=0x74b210 00:29:52.705 [2024-10-01 01:47:32.365491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365500] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.365514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.365536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5bc0, cid 5, qid 0 00:29:52.705 [2024-10-01 01:47:32.365640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.705 [2024-10-01 01:47:32.365655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.705 [2024-10-01 01:47:32.365662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5bc0) on tqpair=0x74b210 00:29:52.705 [2024-10-01 01:47:32.365685] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365693] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.365704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.365725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5bc0, cid 5, qid 0 00:29:52.705 [2024-10-01 01:47:32.365822] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.705 [2024-10-01 01:47:32.365834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:52.705 [2024-10-01 01:47:32.365841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5bc0) on tqpair=0x74b210 00:29:52.705 [2024-10-01 01:47:32.365863] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.365871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.365882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.365902] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5bc0, cid 5, qid 0 00:29:52.705 [2024-10-01 01:47:32.366006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.705 [2024-10-01 01:47:32.366020] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.705 [2024-10-01 01:47:32.366026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5bc0) on tqpair=0x74b210 00:29:52.705 [2024-10-01 01:47:32.366057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366068] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.366078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.366090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.366107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.366118] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.366135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.366150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74b210) 00:29:52.705 [2024-10-01 01:47:32.366170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.705 [2024-10-01 01:47:32.366194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5bc0, cid 5, qid 0 00:29:52.705 [2024-10-01 01:47:32.366205] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5a40, cid 4, qid 0 00:29:52.705 [2024-10-01 01:47:32.366213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5d40, cid 6, qid 0 00:29:52.705 [2024-10-01 
01:47:32.366220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5ec0, cid 7, qid 0 00:29:52.705 [2024-10-01 01:47:32.366439] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.705 [2024-10-01 01:47:32.366452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.705 [2024-10-01 01:47:32.366459] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366465] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=8192, cccid=5 00:29:52.705 [2024-10-01 01:47:32.366473] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5bc0) on tqpair(0x74b210): expected_datao=0, payload_size=8192 00:29:52.705 [2024-10-01 01:47:32.366480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366500] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366510] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366518] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.705 [2024-10-01 01:47:32.366527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.705 [2024-10-01 01:47:32.366534] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=512, cccid=4 00:29:52.705 [2024-10-01 01:47:32.366547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5a40) on tqpair(0x74b210): expected_datao=0, payload_size=512 00:29:52.705 [2024-10-01 01:47:32.366555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366564] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366570] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366579] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.705 [2024-10-01 01:47:32.366588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.705 [2024-10-01 01:47:32.366594] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366600] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=512, cccid=6 00:29:52.705 [2024-10-01 01:47:32.366607] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5d40) on tqpair(0x74b210): expected_datao=0, payload_size=512 00:29:52.705 [2024-10-01 01:47:32.366615] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.705 [2024-10-01 01:47:32.366624] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366630] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.706 [2024-10-01 01:47:32.366647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.706 [2024-10-01 01:47:32.366654] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74b210): datao=0, datal=4096, cccid=7 00:29:52.706 [2024-10-01 01:47:32.366667] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b5ec0) on tqpair(0x74b210): expected_datao=0, payload_size=4096 00:29:52.706 [2024-10-01 01:47:32.366675] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366684] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366691] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.706 [2024-10-01 01:47:32.366716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.706 [2024-10-01 01:47:32.366722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5bc0) on tqpair=0x74b210 00:29:52.706 [2024-10-01 01:47:32.366762] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.706 [2024-10-01 01:47:32.366774] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.706 [2024-10-01 01:47:32.366781] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366787] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5a40) on tqpair=0x74b210 00:29:52.706 [2024-10-01 01:47:32.366802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.706 [2024-10-01 01:47:32.366826] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.706 [2024-10-01 01:47:32.366833] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366839] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5d40) on tqpair=0x74b210 00:29:52.706 [2024-10-01 01:47:32.366849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.706 [2024-10-01 01:47:32.366858] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.706 [2024-10-01 01:47:32.366864] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.706 [2024-10-01 01:47:32.366870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5ec0) on tqpair=0x74b210 00:29:52.706 ===================================================== 00:29:52.706 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.706 ===================================================== 00:29:52.706 Controller Capabilities/Features 00:29:52.706 ================================ 00:29:52.706 Vendor ID: 8086 00:29:52.706 Subsystem Vendor ID: 8086 00:29:52.706 Serial Number: SPDK00000000000001 00:29:52.706 Model Number: SPDK bdev Controller 00:29:52.706 Firmware Version: 25.01 00:29:52.706 Recommended Arb Burst: 6 00:29:52.706 IEEE OUI Identifier: e4 d2 5c 00:29:52.706 Multi-path I/O 00:29:52.706 May have multiple subsystem ports: Yes 00:29:52.706 May have multiple controllers: Yes 00:29:52.706 Associated with SR-IOV VF: No 00:29:52.706 Max Data Transfer Size: 131072 00:29:52.706 Max Number of Namespaces: 32 00:29:52.706 Max Number of I/O Queues: 127 00:29:52.706 NVMe Specification Version (VS): 1.3 00:29:52.706 NVMe Specification Version (Identify): 1.3 00:29:52.706 Maximum Queue Entries: 128 00:29:52.706 Contiguous Queues Required: Yes 00:29:52.706 Arbitration Mechanisms Supported 00:29:52.706 Weighted Round Robin: Not Supported 00:29:52.706 Vendor Specific: Not Supported 00:29:52.706 Reset Timeout: 15000 ms 00:29:52.706 
Doorbell Stride: 4 bytes 00:29:52.706 NVM Subsystem Reset: Not Supported 00:29:52.706 Command Sets Supported 00:29:52.706 NVM Command Set: Supported 00:29:52.706 Boot Partition: Not Supported 00:29:52.706 Memory Page Size Minimum: 4096 bytes 00:29:52.706 Memory Page Size Maximum: 4096 bytes 00:29:52.706 Persistent Memory Region: Not Supported 00:29:52.706 Optional Asynchronous Events Supported 00:29:52.706 Namespace Attribute Notices: Supported 00:29:52.706 Firmware Activation Notices: Not Supported 00:29:52.706 ANA Change Notices: Not Supported 00:29:52.706 PLE Aggregate Log Change Notices: Not Supported 00:29:52.706 LBA Status Info Alert Notices: Not Supported 00:29:52.706 EGE Aggregate Log Change Notices: Not Supported 00:29:52.706 Normal NVM Subsystem Shutdown event: Not Supported 00:29:52.706 Zone Descriptor Change Notices: Not Supported 00:29:52.706 Discovery Log Change Notices: Not Supported 00:29:52.706 Controller Attributes 00:29:52.706 128-bit Host Identifier: Supported 00:29:52.706 Non-Operational Permissive Mode: Not Supported 00:29:52.706 NVM Sets: Not Supported 00:29:52.706 Read Recovery Levels: Not Supported 00:29:52.706 Endurance Groups: Not Supported 00:29:52.706 Predictable Latency Mode: Not Supported 00:29:52.706 Traffic Based Keep ALive: Not Supported 00:29:52.706 Namespace Granularity: Not Supported 00:29:52.706 SQ Associations: Not Supported 00:29:52.706 UUID List: Not Supported 00:29:52.706 Multi-Domain Subsystem: Not Supported 00:29:52.706 Fixed Capacity Management: Not Supported 00:29:52.706 Variable Capacity Management: Not Supported 00:29:52.706 Delete Endurance Group: Not Supported 00:29:52.706 Delete NVM Set: Not Supported 00:29:52.706 Extended LBA Formats Supported: Not Supported 00:29:52.706 Flexible Data Placement Supported: Not Supported 00:29:52.706 00:29:52.706 Controller Memory Buffer Support 00:29:52.706 ================================ 00:29:52.706 Supported: No 00:29:52.706 00:29:52.706 Persistent Memory Region Support 00:29:52.706 ================================ 00:29:52.706 Supported: No 00:29:52.706 00:29:52.706 Admin Command Set Attributes 00:29:52.706 ============================ 00:29:52.706 Security Send/Receive: Not Supported 00:29:52.706 Format NVM: Not Supported 00:29:52.706 Firmware Activate/Download: Not Supported 00:29:52.706 Namespace Management: Not Supported 00:29:52.706 Device Self-Test: Not Supported 00:29:52.706 Directives: Not Supported 00:29:52.706 NVMe-MI: Not Supported 00:29:52.706 Virtualization Management: Not Supported 00:29:52.706 Doorbell Buffer Config: Not Supported 00:29:52.706 Get LBA Status Capability: Not Supported 00:29:52.706 Command & Feature Lockdown Capability: Not Supported 00:29:52.706 Abort Command Limit: 4 00:29:52.706 Async Event Request Limit: 4 00:29:52.706 Number of Firmware Slots: N/A 00:29:52.706 Firmware Slot 1 Read-Only: N/A 00:29:52.706 Firmware Activation Without Reset: N/A 00:29:52.706 Multiple Update Detection Support: N/A 00:29:52.706 Firmware Update Granularity: No Information Provided 00:29:52.706 Per-Namespace SMART Log: No 00:29:52.706 Asymmetric Namespace Access Log Page: Not Supported 00:29:52.706 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:52.706 Command Effects Log Page: Supported 00:29:52.706 Get Log Page Extended Data: Supported 00:29:52.706 Telemetry Log Pages: Not Supported 00:29:52.706 Persistent Event Log Pages: Not Supported 00:29:52.706 Supported Log Pages Log Page: May Support 00:29:52.706 Commands Supported & Effects Log Page: Not Supported 00:29:52.706 Feature Identifiers & 
Effects Log Page:May Support 00:29:52.706 NVMe-MI Commands & Effects Log Page: May Support 00:29:52.706 Data Area 4 for Telemetry Log: Not Supported 00:29:52.706 Error Log Page Entries Supported: 128 00:29:52.706 Keep Alive: Supported 00:29:52.706 Keep Alive Granularity: 10000 ms 00:29:52.706 00:29:52.706 NVM Command Set Attributes 00:29:52.706 ========================== 00:29:52.706 Submission Queue Entry Size 00:29:52.706 Max: 64 00:29:52.706 Min: 64 00:29:52.706 Completion Queue Entry Size 00:29:52.706 Max: 16 00:29:52.706 Min: 16 00:29:52.706 Number of Namespaces: 32 00:29:52.706 Compare Command: Supported 00:29:52.706 Write Uncorrectable Command: Not Supported 00:29:52.707 Dataset Management Command: Supported 00:29:52.707 Write Zeroes Command: Supported 00:29:52.707 Set Features Save Field: Not Supported 00:29:52.707 Reservations: Supported 00:29:52.707 Timestamp: Not Supported 00:29:52.707 Copy: Supported 00:29:52.707 Volatile Write Cache: Present 00:29:52.707 Atomic Write Unit (Normal): 1 00:29:52.707 Atomic Write Unit (PFail): 1 00:29:52.707 Atomic Compare & Write Unit: 1 00:29:52.707 Fused Compare & Write: Supported 00:29:52.707 Scatter-Gather List 00:29:52.707 SGL Command Set: Supported 00:29:52.707 SGL Keyed: Supported 00:29:52.707 SGL Bit Bucket Descriptor: Not Supported 00:29:52.707 SGL Metadata Pointer: Not Supported 00:29:52.707 Oversized SGL: Not Supported 00:29:52.707 SGL Metadata Address: Not Supported 00:29:52.707 SGL Offset: Supported 00:29:52.707 Transport SGL Data Block: Not Supported 00:29:52.707 Replay Protected Memory Block: Not Supported 00:29:52.707 00:29:52.707 Firmware Slot Information 00:29:52.707 ========================= 00:29:52.707 Active slot: 1 00:29:52.707 Slot 1 Firmware Revision: 25.01 00:29:52.707 00:29:52.707 00:29:52.707 Commands Supported and Effects 00:29:52.707 ============================== 00:29:52.707 Admin Commands 00:29:52.707 -------------- 00:29:52.707 Get Log Page (02h): Supported 00:29:52.707 Identify (06h): Supported 00:29:52.707 Abort (08h): Supported 00:29:52.707 Set Features (09h): Supported 00:29:52.707 Get Features (0Ah): Supported 00:29:52.707 Asynchronous Event Request (0Ch): Supported 00:29:52.707 Keep Alive (18h): Supported 00:29:52.707 I/O Commands 00:29:52.707 ------------ 00:29:52.707 Flush (00h): Supported LBA-Change 00:29:52.707 Write (01h): Supported LBA-Change 00:29:52.707 Read (02h): Supported 00:29:52.707 Compare (05h): Supported 00:29:52.707 Write Zeroes (08h): Supported LBA-Change 00:29:52.707 Dataset Management (09h): Supported LBA-Change 00:29:52.707 Copy (19h): Supported LBA-Change 00:29:52.707 00:29:52.707 Error Log 00:29:52.707 ========= 00:29:52.707 00:29:52.707 Arbitration 00:29:52.707 =========== 00:29:52.707 Arbitration Burst: 1 00:29:52.707 00:29:52.707 Power Management 00:29:52.707 ================ 00:29:52.707 Number of Power States: 1 00:29:52.707 Current Power State: Power State #0 00:29:52.707 Power State #0: 00:29:52.707 Max Power: 0.00 W 00:29:52.707 Non-Operational State: Operational 00:29:52.707 Entry Latency: Not Reported 00:29:52.707 Exit Latency: Not Reported 00:29:52.707 Relative Read Throughput: 0 00:29:52.707 Relative Read Latency: 0 00:29:52.707 Relative Write Throughput: 0 00:29:52.707 Relative Write Latency: 0 00:29:52.707 Idle Power: Not Reported 00:29:52.707 Active Power: Not Reported 00:29:52.707 Non-Operational Permissive Mode: Not Supported 00:29:52.707 00:29:52.707 Health Information 00:29:52.707 ================== 00:29:52.707 Critical Warnings: 00:29:52.707 Available Spare Space: 
OK 00:29:52.707 Temperature: OK 00:29:52.707 Device Reliability: OK 00:29:52.707 Read Only: No 00:29:52.707 Volatile Memory Backup: OK 00:29:52.707 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:52.707 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:52.707 Available Spare: 0% 00:29:52.707 Available Spare Threshold: 0% 00:29:52.707 Life Percentage Used:[2024-10-01 01:47:32.366995] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74b210) 00:29:52.707 [2024-10-01 01:47:32.367041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-10-01 01:47:32.367065] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b5ec0, cid 7, qid 0 00:29:52.707 [2024-10-01 01:47:32.367211] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.707 [2024-10-01 01:47:32.367224] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.707 [2024-10-01 01:47:32.367230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5ec0) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367281] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:52.707 [2024-10-01 01:47:32.367300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5440) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.707 [2024-10-01 01:47:32.367320] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b55c0) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.707 [2024-10-01 01:47:32.367335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b5740) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.707 [2024-10-01 01:47:32.367350] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.707 [2024-10-01 01:47:32.367384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367398] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.707 [2024-10-01 01:47:32.367412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.707 [2024-10-01 01:47:32.367435] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.707 [2024-10-01 01:47:32.367571] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.707 [2024-10-01 01:47:32.367584] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.707 [2024-10-01 01:47:32.367591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.707 [2024-10-01 01:47:32.367609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.707 [2024-10-01 01:47:32.367623] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.707 [2024-10-01 01:47:32.367634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.367659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.367782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.367797] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.367803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.367810] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.367818] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:52.708 [2024-10-01 01:47:32.367825] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:52.708 [2024-10-01 01:47:32.367842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.367850] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.367857] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.367868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.367888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.367983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.367995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368011] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368018] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368051] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.368061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.368083] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.368184] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.368199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368229] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.368260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.368281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.368379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.368394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368400] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368407] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.368450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.368471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.368569] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.368584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368614] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.368640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.368661] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.368757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.368772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368779] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368785] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.368828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.368849] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.368944] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.368959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.368965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.368972] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.368988] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.373005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.373021] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74b210) 00:29:52.708 [2024-10-01 01:47:32.373033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.708 [2024-10-01 01:47:32.373056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b58c0, cid 3, qid 0 00:29:52.708 [2024-10-01 01:47:32.373200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.708 [2024-10-01 01:47:32.373213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.708 [2024-10-01 01:47:32.373220] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.708 [2024-10-01 01:47:32.373226] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b58c0) on tqpair=0x74b210 00:29:52.708 [2024-10-01 01:47:32.373239] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:52.708 0% 00:29:52.708 Data Units Read: 0 00:29:52.708 Data Units Written: 0 00:29:52.708 Host Read Commands: 0 00:29:52.708 Host Write Commands: 0 00:29:52.708 Controller Busy Time: 0 minutes 00:29:52.708 Power Cycles: 0 00:29:52.708 Power On Hours: 0 hours 00:29:52.708 Unsafe Shutdowns: 0 00:29:52.708 Unrecoverable Media Errors: 0 00:29:52.708 Lifetime Error Log Entries: 0 00:29:52.708 Warning Temperature Time: 0 minutes 00:29:52.708 Critical Temperature Time: 0 minutes 00:29:52.708 00:29:52.708 Number of Queues 00:29:52.708 ================ 00:29:52.708 Number of I/O Submission Queues: 127 00:29:52.708 Number of I/O Completion Queues: 127 00:29:52.708 00:29:52.708 Active Namespaces 00:29:52.708 ================= 00:29:52.708 Namespace ID:1 00:29:52.708 Error Recovery Timeout: Unlimited 00:29:52.708 Command Set Identifier: NVM (00h) 00:29:52.708 Deallocate: Supported 00:29:52.708 Deallocated/Unwritten Error: Not Supported 00:29:52.708 
Deallocated Read Value: Unknown 00:29:52.708 Deallocate in Write Zeroes: Not Supported 00:29:52.708 Deallocated Guard Field: 0xFFFF 00:29:52.708 Flush: Supported 00:29:52.708 Reservation: Supported 00:29:52.708 Namespace Sharing Capabilities: Multiple Controllers 00:29:52.708 Size (in LBAs): 131072 (0GiB) 00:29:52.708 Capacity (in LBAs): 131072 (0GiB) 00:29:52.708 Utilization (in LBAs): 131072 (0GiB) 00:29:52.708 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:52.708 EUI64: ABCDEF0123456789 00:29:52.708 UUID: da034375-f8f1-44ad-8a9e-1ec9df285957 00:29:52.708 Thin Provisioning: Not Supported 00:29:52.709 Per-NS Atomic Units: Yes 00:29:52.709 Atomic Boundary Size (Normal): 0 00:29:52.709 Atomic Boundary Size (PFail): 0 00:29:52.709 Atomic Boundary Offset: 0 00:29:52.709 Maximum Single Source Range Length: 65535 00:29:52.709 Maximum Copy Length: 65535 00:29:52.709 Maximum Source Range Count: 1 00:29:52.709 NGUID/EUI64 Never Reused: No 00:29:52.709 Namespace Write Protected: No 00:29:52.709 Number of LBA Formats: 1 00:29:52.709 Current LBA Format: LBA Format #00 00:29:52.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:52.709 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.709 rmmod nvme_tcp 00:29:52.709 rmmod nvme_fabrics 00:29:52.709 rmmod nvme_keyring 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 999795 ']' 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 999795 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 999795 ']' 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 999795 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 999795 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 999795' 00:29:52.709 killing process with pid 999795 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 999795 00:29:52.709 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 999795 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.969 01:47:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.937 01:47:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.937 00:29:54.937 real 0m5.455s 00:29:54.937 user 0m4.472s 00:29:54.937 sys 0m1.823s 00:29:54.937 01:47:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.937 01:47:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.937 ************************************ 00:29:54.937 END TEST nvmf_identify 00:29:54.937 ************************************ 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.198 ************************************ 00:29:55.198 START TEST nvmf_perf 00:29:55.198 ************************************ 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:55.198 * Looking for test storage... 
00:29:55.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:55.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.198 --rc genhtml_branch_coverage=1 00:29:55.198 --rc genhtml_function_coverage=1 00:29:55.198 --rc genhtml_legend=1 00:29:55.198 --rc geninfo_all_blocks=1 00:29:55.198 --rc geninfo_unexecuted_blocks=1 00:29:55.198 00:29:55.198 ' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:55.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.198 --rc genhtml_branch_coverage=1 00:29:55.198 --rc genhtml_function_coverage=1 00:29:55.198 --rc genhtml_legend=1 00:29:55.198 --rc geninfo_all_blocks=1 00:29:55.198 --rc geninfo_unexecuted_blocks=1 00:29:55.198 00:29:55.198 ' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:55.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.198 --rc genhtml_branch_coverage=1 00:29:55.198 --rc genhtml_function_coverage=1 00:29:55.198 --rc genhtml_legend=1 00:29:55.198 --rc geninfo_all_blocks=1 00:29:55.198 --rc geninfo_unexecuted_blocks=1 00:29:55.198 00:29:55.198 ' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:55.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.198 --rc genhtml_branch_coverage=1 00:29:55.198 --rc genhtml_function_coverage=1 00:29:55.198 --rc genhtml_legend=1 00:29:55.198 --rc geninfo_all_blocks=1 00:29:55.198 --rc geninfo_unexecuted_blocks=1 00:29:55.198 00:29:55.198 ' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.198 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:55.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.199 01:47:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.199 01:47:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.199 01:47:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:55.199 01:47:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:55.199 01:47:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.199 01:47:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
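(Annotation, not part of the captured output: the trace above is nvmf/common.sh assembling its lists of supported NIC device IDs, Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs, before it scans the PCI bus for matching ports. The loop below is an illustrative stand-alone equivalent of that scan, not code from the harness; only the device IDs are taken from the arrays above.)

for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    case "${vendor}:${device}" in
        0x8086:0x1592|0x8086:0x159b|0x8086:0x37d2)
            # E810/X722 port; its kernel net device name sits under .../net/
            echo "Intel port ${dev##*/}: $(ls "$dev/net" 2>/dev/null)" ;;
        0x15b3:*)
            echo "Mellanox port ${dev##*/} (compare against the mlx ID list above)" ;;
    esac
done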
00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:57.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:57.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:57.732 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:57.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.733 01:47:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:57.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.733 01:47:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:29:57.733 00:29:57.733 --- 10.0.0.2 ping statistics --- 00:29:57.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.733 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:29:57.733 00:29:57.733 --- 10.0.0.1 ping statistics --- 00:29:57.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.733 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=1001872 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 1001872 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1001872 ']' 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
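(Annotation, not part of the captured output: at this point nvmf/common.sh has split the two NIC ports across network namespaces, opened TCP port 4420, and verified reachability in both directions before nvmf_tgt is launched inside the target namespace. The following is a condensed sketch of that setup, using the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses exactly as they appear in the trace above; adapt the interface names for other hardware.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # same rule as common.sh@786 above
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

(The target application is then launched inside that namespace, matching the command traced just above: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.)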
00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.733 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.733 [2024-10-01 01:47:37.297080] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:29:57.733 [2024-10-01 01:47:37.297159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.733 [2024-10-01 01:47:37.371769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.733 [2024-10-01 01:47:37.463167] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.733 [2024-10-01 01:47:37.463237] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.733 [2024-10-01 01:47:37.463254] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.733 [2024-10-01 01:47:37.463268] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.733 [2024-10-01 01:47:37.463287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.733 [2024-10-01 01:47:37.463352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.733 [2024-10-01 01:47:37.463420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.733 [2024-10-01 01:47:37.463542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.733 [2024-10-01 01:47:37.463545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:57.991 01:47:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:01.271 01:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:01.271 01:47:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:01.271 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:01.271 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:01.528 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:01.528 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:01.528 01:47:41 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:01.528 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:01.528 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:01.786 [2024-10-01 01:47:41.545356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.786 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.042 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:02.042 01:47:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:02.606 01:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:02.606 01:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:02.864 01:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.122 [2024-10-01 01:47:42.785899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.122 01:47:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.379 01:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:03.379 01:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:03.379 01:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:03.379 01:47:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:04.749 Initializing NVMe Controllers 00:30:04.749 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:04.749 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:04.749 Initialization complete. Launching workers. 
00:30:04.749 ======================================================== 00:30:04.749 Latency(us) 00:30:04.749 Device Information : IOPS MiB/s Average min max 00:30:04.749 PCIE (0000:88:00.0) NSID 1 from core 0: 85136.91 332.57 375.22 39.40 7474.79 00:30:04.749 ======================================================== 00:30:04.749 Total : 85136.91 332.57 375.22 39.40 7474.79 00:30:04.749 00:30:04.749 01:47:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.120 Initializing NVMe Controllers 00:30:06.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:06.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:06.120 Initialization complete. Launching workers. 00:30:06.120 ======================================================== 00:30:06.120 Latency(us) 00:30:06.120 Device Information : IOPS MiB/s Average min max 00:30:06.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 144.92 0.57 7079.64 163.89 45803.44 00:30:06.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.97 0.26 15158.83 4027.46 47907.56 00:30:06.120 ======================================================== 00:30:06.120 Total : 210.89 0.82 9606.78 163.89 47907.56 00:30:06.120 00:30:06.120 01:47:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.492 Initializing NVMe Controllers 00:30:07.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:07.492 Initialization complete. Launching workers. 00:30:07.492 ======================================================== 00:30:07.492 Latency(us) 00:30:07.492 Device Information : IOPS MiB/s Average min max 00:30:07.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8282.94 32.36 3864.35 637.00 10518.17 00:30:07.492 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3870.43 15.12 8282.58 6830.25 16435.19 00:30:07.492 ======================================================== 00:30:07.492 Total : 12153.36 47.47 5271.41 637.00 16435.19 00:30:07.492 00:30:07.492 01:47:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:07.492 01:47:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:07.492 01:47:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.018 Initializing NVMe Controllers 00:30:10.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.018 Controller IO queue size 128, less than required. 00:30:10.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:10.018 Controller IO queue size 128, less than required. 00:30:10.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.018 Initialization complete. Launching workers. 00:30:10.018 ======================================================== 00:30:10.019 Latency(us) 00:30:10.019 Device Information : IOPS MiB/s Average min max 00:30:10.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1384.50 346.12 94123.39 54185.11 142256.81 00:30:10.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.46 143.37 229603.09 86505.17 337589.92 00:30:10.019 ======================================================== 00:30:10.019 Total : 1957.96 489.49 133803.78 54185.11 337589.92 00:30:10.019 00:30:10.019 01:47:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:10.019 No valid NVMe controllers or AIO or URING devices found 00:30:10.019 Initializing NVMe Controllers 00:30:10.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.019 Controller IO queue size 128, less than required. 00:30:10.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.019 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:10.019 Controller IO queue size 128, less than required. 00:30:10.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.019 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:10.019 WARNING: Some requested NVMe devices were skipped 00:30:10.019 01:47:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:13.302 Initializing NVMe Controllers 00:30:13.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.302 Controller IO queue size 128, less than required. 00:30:13.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.302 Controller IO queue size 128, less than required. 00:30:13.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:13.302 Initialization complete. Launching workers. 
00:30:13.302 00:30:13.302 ==================== 00:30:13.302 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:13.302 TCP transport: 00:30:13.302 polls: 11001 00:30:13.302 idle_polls: 6263 00:30:13.302 sock_completions: 4738 00:30:13.302 nvme_completions: 5257 00:30:13.302 submitted_requests: 7888 00:30:13.302 queued_requests: 1 00:30:13.302 00:30:13.302 ==================== 00:30:13.302 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:13.302 TCP transport: 00:30:13.302 polls: 13601 00:30:13.302 idle_polls: 8573 00:30:13.302 sock_completions: 5028 00:30:13.302 nvme_completions: 6217 00:30:13.302 submitted_requests: 9450 00:30:13.302 queued_requests: 1 00:30:13.302 ======================================================== 00:30:13.302 Latency(us) 00:30:13.302 Device Information : IOPS MiB/s Average min max 00:30:13.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1311.29 327.82 99304.57 57442.57 140292.51 00:30:13.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1550.79 387.70 84023.97 40295.84 134221.02 00:30:13.302 ======================================================== 00:30:13.302 Total : 2862.08 715.52 91024.91 40295.84 140292.51 00:30:13.302 00:30:13.302 01:47:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:13.302 01:47:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.302 01:47:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:13.302 01:47:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:13.302 01:47:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c62edc32-a51d-4002-8bf3-c7da8ce6acac 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c62edc32-a51d-4002-8bf3-c7da8ce6acac 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c62edc32-a51d-4002-8bf3-c7da8ce6acac 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:16.581 { 00:30:16.581 "uuid": "c62edc32-a51d-4002-8bf3-c7da8ce6acac", 00:30:16.581 "name": "lvs_0", 00:30:16.581 "base_bdev": "Nvme0n1", 00:30:16.581 "total_data_clusters": 238234, 00:30:16.581 "free_clusters": 238234, 00:30:16.581 "block_size": 512, 00:30:16.581 "cluster_size": 4194304 00:30:16.581 } 00:30:16.581 ]' 00:30:16.581 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c62edc32-a51d-4002-8bf3-c7da8ce6acac") .free_clusters' 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:16.839 01:47:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c62edc32-a51d-4002-8bf3-c7da8ce6acac") .cluster_size' 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:16.839 952936 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:16.839 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c62edc32-a51d-4002-8bf3-c7da8ce6acac lbd_0 20480 00:30:17.096 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=758b04d2-d44c-402c-a97d-d41cab223575 00:30:17.096 01:47:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 758b04d2-d44c-402c-a97d-d41cab223575 lvs_n_0 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=41b209fd-93ea-4ce3-a737-d458f02f9405 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 41b209fd-93ea-4ce3-a737-d458f02f9405 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=41b209fd-93ea-4ce3-a737-d458f02f9405 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:18.029 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.287 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:18.287 { 00:30:18.287 "uuid": "c62edc32-a51d-4002-8bf3-c7da8ce6acac", 00:30:18.287 "name": "lvs_0", 00:30:18.287 "base_bdev": "Nvme0n1", 00:30:18.287 "total_data_clusters": 238234, 00:30:18.287 "free_clusters": 233114, 00:30:18.287 "block_size": 512, 00:30:18.287 "cluster_size": 4194304 00:30:18.287 }, 00:30:18.287 { 00:30:18.287 "uuid": "41b209fd-93ea-4ce3-a737-d458f02f9405", 00:30:18.287 "name": "lvs_n_0", 00:30:18.287 "base_bdev": "758b04d2-d44c-402c-a97d-d41cab223575", 00:30:18.287 "total_data_clusters": 5114, 00:30:18.287 "free_clusters": 5114, 00:30:18.287 "block_size": 512, 00:30:18.287 "cluster_size": 4194304 00:30:18.287 } 00:30:18.287 ]' 00:30:18.287 01:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="41b209fd-93ea-4ce3-a737-d458f02f9405") .free_clusters' 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="41b209fd-93ea-4ce3-a737-d458f02f9405") .cluster_size' 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:30:18.287 20456 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:18.287 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 41b209fd-93ea-4ce3-a737-d458f02f9405 lbd_nest_0 20456 00:30:18.545 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a86560dd-f858-4e6b-9a78-399feb6fd574 00:30:18.545 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.802 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:18.802 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a86560dd-f858-4e6b-9a78-399feb6fd574 00:30:19.060 01:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.317 01:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:19.317 01:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:19.317 01:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:19.317 01:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:19.317 01:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.507 Initializing NVMe Controllers 00:30:31.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.507 Initialization complete. Launching workers. 00:30:31.507 ======================================================== 00:30:31.507 Latency(us) 00:30:31.507 Device Information : IOPS MiB/s Average min max 00:30:31.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.99 0.02 20464.00 193.41 48934.11 00:30:31.507 ======================================================== 00:30:31.507 Total : 48.99 0.02 20464.00 193.41 48934.11 00:30:31.507 00:30:31.507 01:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.507 01:48:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.470 Initializing NVMe Controllers 00:30:41.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.470 Initialization complete. Launching workers. 
00:30:41.470 ======================================================== 00:30:41.470 Latency(us) 00:30:41.470 Device Information : IOPS MiB/s Average min max 00:30:41.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.10 9.39 13324.94 5066.88 50861.06 00:30:41.470 ======================================================== 00:30:41.470 Total : 75.10 9.39 13324.94 5066.88 50861.06 00:30:41.470 00:30:41.470 01:48:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:41.470 01:48:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.470 01:48:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.433 Initializing NVMe Controllers 00:30:51.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.433 Initialization complete. Launching workers. 00:30:51.433 ======================================================== 00:30:51.433 Latency(us) 00:30:51.433 Device Information : IOPS MiB/s Average min max 00:30:51.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7460.63 3.64 4288.82 284.66 12210.85 00:30:51.433 ======================================================== 00:30:51.433 Total : 7460.63 3.64 4288.82 284.66 12210.85 00:30:51.433 00:30:51.433 01:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.433 01:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.396 Initializing NVMe Controllers 00:31:01.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.396 Initialization complete. Launching workers. 00:31:01.396 ======================================================== 00:31:01.396 Latency(us) 00:31:01.396 Device Information : IOPS MiB/s Average min max 00:31:01.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3170.86 396.36 10097.10 849.81 22998.49 00:31:01.396 ======================================================== 00:31:01.396 Total : 3170.86 396.36 10097.10 849.81 22998.49 00:31:01.396 00:31:01.396 01:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:01.396 01:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:01.396 01:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.418 Initializing NVMe Controllers 00:31:11.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.418 Controller IO queue size 128, less than required. 00:31:11.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:11.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.418 Initialization complete. Launching workers. 00:31:11.418 ======================================================== 00:31:11.418 Latency(us) 00:31:11.418 Device Information : IOPS MiB/s Average min max 00:31:11.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11697.46 5.71 10947.51 1856.24 30803.26 00:31:11.418 ======================================================== 00:31:11.418 Total : 11697.46 5.71 10947.51 1856.24 30803.26 00:31:11.418 00:31:11.418 01:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:11.418 01:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:21.382 Initializing NVMe Controllers 00:31:21.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.382 Controller IO queue size 128, less than required. 00:31:21.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:21.382 Initialization complete. Launching workers. 00:31:21.382 ======================================================== 00:31:21.382 Latency(us) 00:31:21.382 Device Information : IOPS MiB/s Average min max 00:31:21.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.38 148.80 108088.19 15943.47 234944.89 00:31:21.382 ======================================================== 00:31:21.382 Total : 1190.38 148.80 108088.19 15943.47 234944.89 00:31:21.382 00:31:21.382 01:49:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.948 01:49:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a86560dd-f858-4e6b-9a78-399feb6fd574 00:31:22.514 01:49:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:22.772 01:49:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 758b04d2-d44c-402c-a97d-d41cab223575 00:31:23.338 01:49:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.338 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.338 rmmod nvme_tcp 
00:31:23.338 rmmod nvme_fabrics 00:31:23.597 rmmod nvme_keyring 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 1001872 ']' 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 1001872 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1001872 ']' 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1001872 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001872 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001872' 00:31:23.597 killing process with pid 1001872 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1001872 00:31:23.597 01:49:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1001872 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:31:25.499 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.500 01:49:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.405 00:31:27.405 real 1m32.059s 00:31:27.405 user 5m41.054s 00:31:27.405 sys 0m15.289s 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:27.405 ************************************ 00:31:27.405 END TEST nvmf_perf 00:31:27.405 ************************************ 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.405 ************************************ 00:31:27.405 START TEST nvmf_fio_host 00:31:27.405 ************************************ 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:27.405 * Looking for test storage... 00:31:27.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:31:27.405 01:49:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.405 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:27.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.406 --rc genhtml_branch_coverage=1 00:31:27.406 --rc genhtml_function_coverage=1 00:31:27.406 --rc genhtml_legend=1 00:31:27.406 --rc geninfo_all_blocks=1 00:31:27.406 --rc geninfo_unexecuted_blocks=1 00:31:27.406 00:31:27.406 ' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:27.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.406 --rc genhtml_branch_coverage=1 00:31:27.406 --rc genhtml_function_coverage=1 00:31:27.406 --rc genhtml_legend=1 00:31:27.406 --rc geninfo_all_blocks=1 00:31:27.406 --rc geninfo_unexecuted_blocks=1 00:31:27.406 00:31:27.406 ' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:27.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.406 --rc genhtml_branch_coverage=1 00:31:27.406 --rc genhtml_function_coverage=1 00:31:27.406 --rc genhtml_legend=1 00:31:27.406 --rc geninfo_all_blocks=1 00:31:27.406 --rc geninfo_unexecuted_blocks=1 00:31:27.406 00:31:27.406 ' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:27.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.406 --rc genhtml_branch_coverage=1 00:31:27.406 --rc genhtml_function_coverage=1 00:31:27.406 --rc genhtml_legend=1 00:31:27.406 --rc geninfo_all_blocks=1 00:31:27.406 --rc geninfo_unexecuted_blocks=1 00:31:27.406 00:31:27.406 ' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.406 01:49:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:27.406 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:27.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:27.407 
01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.407 01:49:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:29.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:29.309 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:29.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:29.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:29.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:29.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:31:29.310 00:31:29.310 --- 10.0.0.2 ping statistics --- 00:31:29.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.310 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:29.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:31:29.310 00:31:29.310 --- 10.0.0.1 ping statistics --- 00:31:29.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.310 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:29.310 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1014622 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.568 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1014622 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1014622 ']' 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.569 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.569 [2024-10-01 01:49:09.220292] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:31:29.569 [2024-10-01 01:49:09.220395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.569 [2024-10-01 01:49:09.289016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.569 [2024-10-01 01:49:09.383497] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.569 [2024-10-01 01:49:09.383573] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.569 [2024-10-01 01:49:09.383589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.569 [2024-10-01 01:49:09.383603] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.569 [2024-10-01 01:49:09.383614] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.569 [2024-10-01 01:49:09.383714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.569 [2024-10-01 01:49:09.383782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.569 [2024-10-01 01:49:09.383878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.569 [2024-10-01 01:49:09.383880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.826 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:29.827 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:29.827 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.084 [2024-10-01 01:49:09.763358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.084 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:30.084 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.084 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.084 01:49:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:30.342 Malloc1 00:31:30.342 01:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:30.907 01:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:30.907 01:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.164 [2024-10-01 01:49:11.008162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.422 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:31.680 01:49:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:31.680 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:31.680 fio-3.35 00:31:31.680 Starting 1 thread 00:31:34.209 00:31:34.209 test: (groupid=0, jobs=1): 
err= 0: pid=1015051: Tue Oct 1 01:49:14 2024 00:31:34.209 read: IOPS=8839, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec) 00:31:34.209 slat (usec): min=2, max=159, avg= 2.62, stdev= 1.91 00:31:34.209 clat (usec): min=2220, max=13956, avg=7936.35, stdev=632.46 00:31:34.209 lat (usec): min=2244, max=13959, avg=7938.97, stdev=632.37 00:31:34.209 clat percentiles (usec): 00:31:34.209 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:31:34.209 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:31:34.209 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:31:34.209 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11994], 99.95th=[12387], 00:31:34.209 | 99.99th=[13960] 00:31:34.209 bw ( KiB/s): min=33952, max=36224, per=99.97%, avg=35346.00, stdev=974.21, samples=4 00:31:34.209 iops : min= 8488, max= 9056, avg=8836.50, stdev=243.55, samples=4 00:31:34.209 write: IOPS=8855, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2007msec); 0 zone resets 00:31:34.209 slat (nsec): min=2124, max=92567, avg=2699.01, stdev=1404.73 00:31:34.209 clat (usec): min=1596, max=12200, avg=6433.29, stdev=538.65 00:31:34.209 lat (usec): min=1604, max=12202, avg=6435.99, stdev=538.58 00:31:34.209 clat percentiles (usec): 00:31:34.209 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:31:34.209 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:31:34.209 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:31:34.209 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[10552], 99.95th=[11600], 00:31:34.209 | 99.99th=[12125] 00:31:34.209 bw ( KiB/s): min=34912, max=35776, per=100.00%, avg=35424.00, stdev=402.23, samples=4 00:31:34.209 iops : min= 8728, max= 8944, avg=8856.00, stdev=100.56, samples=4 00:31:34.209 lat (msec) : 2=0.02%, 4=0.11%, 10=99.69%, 20=0.18% 00:31:34.209 cpu : usr=63.36%, sys=32.90%, ctx=101, majf=0, minf=37 00:31:34.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:34.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:34.209 issued rwts: total=17740,17772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:34.209 00:31:34.209 Run status group 0 (all jobs): 00:31:34.209 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2007-2007msec 00:31:34.209 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.8MB), run=2007-2007msec 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:34.209 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.467 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.467 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.467 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.467 01:49:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:34.467 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:34.467 fio-3.35 00:31:34.467 Starting 1 thread 00:31:36.995 00:31:36.995 test: (groupid=0, jobs=1): err= 0: pid=1015383: Tue Oct 1 01:49:16 2024 00:31:36.995 read: IOPS=7954, BW=124MiB/s (130MB/s)(249MiB/2007msec) 00:31:36.995 slat (nsec): min=2802, max=90680, avg=3846.82, stdev=1898.93 00:31:36.995 clat (usec): min=2374, max=19399, avg=9347.54, stdev=2290.73 00:31:36.995 lat (usec): min=2378, max=19402, avg=9351.39, stdev=2290.73 00:31:36.995 clat percentiles (usec): 00:31:36.995 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7308], 00:31:36.995 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:31:36.995 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12518], 95.00th=[13566], 00:31:36.995 | 99.00th=[15139], 99.50th=[16057], 99.90th=[16909], 99.95th=[16909], 00:31:36.995 | 99.99th=[18220] 00:31:36.995 bw ( KiB/s): min=56960, max=71520, per=51.27%, avg=65248.00, stdev=7193.15, samples=4 00:31:36.995 iops : min= 3560, max= 4470, avg=4078.00, stdev=449.57, samples=4 00:31:36.995 write: IOPS=4622, BW=72.2MiB/s (75.7MB/s)(133MiB/1848msec); 0 zone resets 00:31:36.995 
slat (usec): min=30, max=194, avg=34.47, stdev= 6.52 00:31:36.995 clat (usec): min=6384, max=23683, avg=11953.67, stdev=2247.36 00:31:36.995 lat (usec): min=6417, max=23714, avg=11988.14, stdev=2247.17 00:31:36.995 clat percentiles (usec): 00:31:36.995 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:31:36.995 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:31:36.995 | 70.00th=[12911], 80.00th=[13698], 90.00th=[15008], 95.00th=[16319], 00:31:36.995 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[23200], 00:31:36.995 | 99.99th=[23725] 00:31:36.995 bw ( KiB/s): min=59616, max=73856, per=91.51%, avg=67688.00, stdev=7169.78, samples=4 00:31:36.995 iops : min= 3726, max= 4616, avg=4230.50, stdev=448.11, samples=4 00:31:36.995 lat (msec) : 4=0.18%, 10=49.88%, 20=49.91%, 50=0.03% 00:31:36.995 cpu : usr=71.73%, sys=25.42%, ctx=42, majf=0, minf=59 00:31:36.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.995 issued rwts: total=15964,8543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.995 00:31:36.995 Run status group 0 (all jobs): 00:31:36.995 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (262MB), run=2007-2007msec 00:31:36.995 WRITE: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=133MiB (140MB), run=1848-1848msec 00:31:36.995 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.252 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:37.252 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:37.252 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:37.252 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:37.252 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:37.253 01:49:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:40.530 Nvme0n1 00:31:40.530 01:49:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=60c9603a-da66-4ae9-9c11-e57493e794d8 
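Editor's note: the steps around this point drive the lvol setup through rpc.py — attach the local NVMe SSD, create an lvol store on it, compute its free space (get_lvs_free_mb), and carve a single lvol to back the next NVMe-oF namespace. A condensed sketch of that flow is below; the RPC names and the PCIe address 0000:88:00.0 come straight from the trace, the rest is illustrative.

#!/usr/bin/env bash
# Condensed sketch of the lvol flow fio.sh performs here via rpc.py.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Expose the local NVMe SSD as bdev Nvme0n1.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0

# 1 GiB clusters, as in the trace; the command prints the lvstore UUID.
ls_guid=$($rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0)

# free MiB = free_clusters * cluster_size / 1 MiB (what get_lvs_free_mb computes);
# with 930 free 1 GiB clusters this yields the 952320 seen in the log.
free_mb=$($rpc bdev_lvol_get_lvstores | jq -r \
  ".[] | select(.uuid==\"$ls_guid\") | (.free_clusters * .cluster_size / 1048576 | floor)")

# Back the nqn.2016-06.io.spdk:cnode2 namespace with one lvol spanning that space.
$rpc bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"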
00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 60c9603a-da66-4ae9-9c11-e57493e794d8 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=60c9603a-da66-4ae9-9c11-e57493e794d8 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:43.810 01:49:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:43.810 { 00:31:43.810 "uuid": "60c9603a-da66-4ae9-9c11-e57493e794d8", 00:31:43.810 "name": "lvs_0", 00:31:43.810 "base_bdev": "Nvme0n1", 00:31:43.810 "total_data_clusters": 930, 00:31:43.810 "free_clusters": 930, 00:31:43.810 "block_size": 512, 00:31:43.810 "cluster_size": 1073741824 00:31:43.810 } 00:31:43.810 ]' 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60c9603a-da66-4ae9-9c11-e57493e794d8") .free_clusters' 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60c9603a-da66-4ae9-9c11-e57493e794d8") .cluster_size' 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:43.810 952320 00:31:43.810 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:44.068 659ac0e7-ccf0-49ec-975c-a7e1be6ad373 00:31:44.068 01:49:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:44.326 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:44.584 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:44.842 01:49:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.100 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.100 fio-3.35 00:31:45.100 Starting 1 thread 00:31:47.629 00:31:47.629 test: (groupid=0, jobs=1): err= 0: pid=1016781: Tue Oct 1 01:49:27 2024 00:31:47.629 read: IOPS=5903, BW=23.1MiB/s (24.2MB/s)(46.3MiB/2009msec) 00:31:47.629 slat (nsec): min=1984, max=134631, avg=2619.09, stdev=2052.80 00:31:47.629 clat (usec): min=933, max=171434, avg=11851.57, stdev=11722.93 00:31:47.629 lat (usec): min=936, max=171474, avg=11854.18, stdev=11723.18 00:31:47.629 clat percentiles (msec): 00:31:47.629 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:47.629 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:47.629 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:47.629 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:47.629 | 99.99th=[ 171] 00:31:47.629 bw ( KiB/s): min=16448, 
max=26056, per=99.94%, avg=23602.00, stdev=4769.66, samples=4 00:31:47.629 iops : min= 4112, max= 6514, avg=5900.50, stdev=1192.41, samples=4 00:31:47.629 write: IOPS=5901, BW=23.1MiB/s (24.2MB/s)(46.3MiB/2009msec); 0 zone resets 00:31:47.629 slat (usec): min=2, max=165, avg= 2.71, stdev= 1.96 00:31:47.629 clat (usec): min=294, max=169715, avg=9627.11, stdev=11010.68 00:31:47.629 lat (usec): min=297, max=169721, avg=9629.82, stdev=11011.03 00:31:47.629 clat percentiles (msec): 00:31:47.629 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:47.629 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:47.629 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:47.629 | 99.00th=[ 11], 99.50th=[ 18], 99.90th=[ 169], 99.95th=[ 169], 00:31:47.629 | 99.99th=[ 169] 00:31:47.629 bw ( KiB/s): min=17384, max=25792, per=99.88%, avg=23580.00, stdev=4132.78, samples=4 00:31:47.629 iops : min= 4346, max= 6448, avg=5895.00, stdev=1033.19, samples=4 00:31:47.629 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:47.629 lat (msec) : 2=0.03%, 4=0.13%, 10=53.48%, 20=45.79%, 250=0.54% 00:31:47.629 cpu : usr=60.16%, sys=36.95%, ctx=79, majf=0, minf=37 00:31:47.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:47.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.629 issued rwts: total=11861,11857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.629 00:31:47.629 Run status group 0 (all jobs): 00:31:47.629 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2009-2009msec 00:31:47.629 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2009-2009msec 00:31:47.629 01:49:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:47.887 01:49:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fef0e855-c7b8-4390-b3a6-4baf492dc7a7 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fef0e855-c7b8-4390-b3a6-4baf492dc7a7 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fef0e855-c7b8-4390-b3a6-4baf492dc7a7 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:49.265 01:49:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:49.265 { 00:31:49.265 "uuid": "60c9603a-da66-4ae9-9c11-e57493e794d8", 00:31:49.265 "name": "lvs_0", 00:31:49.265 "base_bdev": "Nvme0n1", 00:31:49.265 "total_data_clusters": 930, 00:31:49.265 "free_clusters": 0, 00:31:49.265 "block_size": 
512, 00:31:49.265 "cluster_size": 1073741824 00:31:49.265 }, 00:31:49.265 { 00:31:49.265 "uuid": "fef0e855-c7b8-4390-b3a6-4baf492dc7a7", 00:31:49.265 "name": "lvs_n_0", 00:31:49.265 "base_bdev": "659ac0e7-ccf0-49ec-975c-a7e1be6ad373", 00:31:49.265 "total_data_clusters": 237847, 00:31:49.265 "free_clusters": 237847, 00:31:49.265 "block_size": 512, 00:31:49.265 "cluster_size": 4194304 00:31:49.265 } 00:31:49.265 ]' 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fef0e855-c7b8-4390-b3a6-4baf492dc7a7") .free_clusters' 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fef0e855-c7b8-4390-b3a6-4baf492dc7a7") .cluster_size' 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:49.265 951388 00:31:49.265 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:50.203 223f97cb-b489-419c-8d6b-b540ddfd9bdf 00:31:50.203 01:49:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:50.475 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:50.757 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:51.018 01:49:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.018 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:51.018 fio-3.35 00:31:51.018 Starting 1 thread 00:31:53.549 00:31:53.549 test: (groupid=0, jobs=1): err= 0: pid=1017520: Tue Oct 1 01:49:33 2024 00:31:53.549 read: IOPS=5720, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2009msec) 00:31:53.549 slat (usec): min=2, max=167, avg= 2.81, stdev= 2.48 00:31:53.549 clat (usec): min=4358, max=20483, avg=12291.08, stdev=1065.46 00:31:53.549 lat (usec): min=4370, max=20485, avg=12293.89, stdev=1065.34 00:31:53.549 clat percentiles (usec): 00:31:53.549 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[10945], 20.00th=[11469], 00:31:53.549 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:31:53.549 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:31:53.549 | 99.00th=[14615], 99.50th=[14877], 99.90th=[17957], 99.95th=[20317], 00:31:53.549 | 99.99th=[20317] 00:31:53.549 bw ( KiB/s): min=21584, max=23392, per=99.93%, avg=22866.00, stdev=861.39, samples=4 00:31:53.549 iops : min= 5396, max= 5848, avg=5716.50, stdev=215.35, samples=4 00:31:53.549 write: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec); 0 zone resets 00:31:53.549 slat (usec): min=2, max=137, avg= 2.90, stdev= 1.93 00:31:53.549 clat (usec): min=2145, max=20134, avg=9890.94, stdev=947.10 00:31:53.549 lat (usec): min=2155, max=20136, avg=9893.84, stdev=947.05 00:31:53.549 clat percentiles (usec): 00:31:53.549 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:53.549 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:31:53.549 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:31:53.549 | 99.00th=[11863], 
99.50th=[12256], 99.90th=[17695], 99.95th=[19006], 00:31:53.549 | 99.99th=[20055] 00:31:53.549 bw ( KiB/s): min=22664, max=23040, per=99.85%, avg=22802.00, stdev=166.07, samples=4 00:31:53.549 iops : min= 5666, max= 5760, avg=5700.50, stdev=41.52, samples=4 00:31:53.549 lat (msec) : 4=0.04%, 10=28.74%, 20=71.18%, 50=0.04% 00:31:53.549 cpu : usr=57.67%, sys=39.59%, ctx=117, majf=0, minf=37 00:31:53.549 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:53.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.549 issued rwts: total=11492,11469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.549 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.549 00:31:53.549 Run status group 0 (all jobs): 00:31:53.549 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2009-2009msec 00:31:53.549 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:31:53.549 01:49:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:53.807 01:49:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:53.807 01:49:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:57.997 01:49:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:57.997 01:49:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:01.289 01:49:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:01.289 01:49:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:03.195 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.196 rmmod nvme_tcp 00:32:03.196 rmmod nvme_fabrics 00:32:03.196 rmmod nvme_keyring 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:03.196 
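Editor's note: the remainder of this test is nvmftestfini. A rough sketch of what it does, assuming the interface and namespace names used in this run, is below: unload the host-side NVMe modules, stop the nvmf_tgt started earlier, drop only the iptables rules the test tagged, and undo the namespace plumbing. Error handling is simplified relative to the real helpers.

#!/usr/bin/env bash
# Rough teardown sketch (names taken from this run's log).
nvmfpid=1014622            # pid recorded when nvmf_tgt was launched

# Unload host-side modules; tolerate "module in use" on shared hosts.
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
  modprobe -v -r "$mod" || true
done

# Stop the target and wait for it to exit.
kill "$nvmfpid" 2>/dev/null || true
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done

# Keep every rule except the ones the ipts helper commented with SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace and flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1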
01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 1014622 ']' 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 1014622 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1014622 ']' 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1014622 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1014622 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1014622' 00:32:03.196 killing process with pid 1014622 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1014622 00:32:03.196 01:49:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1014622 00:32:03.454 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.455 01:49:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.361 00:32:05.361 real 0m38.207s 00:32:05.361 user 2m27.340s 00:32:05.361 sys 0m7.188s 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.361 ************************************ 00:32:05.361 END TEST nvmf_fio_host 00:32:05.361 ************************************ 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.361 ************************************ 00:32:05.361 START TEST nvmf_failover 00:32:05.361 ************************************ 00:32:05.361 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:05.620 * Looking for test storage... 00:32:05.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:05.620 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.621 --rc genhtml_branch_coverage=1 00:32:05.621 --rc genhtml_function_coverage=1 00:32:05.621 --rc genhtml_legend=1 00:32:05.621 --rc geninfo_all_blocks=1 00:32:05.621 --rc geninfo_unexecuted_blocks=1 00:32:05.621 00:32:05.621 ' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.621 --rc genhtml_branch_coverage=1 00:32:05.621 --rc genhtml_function_coverage=1 00:32:05.621 --rc genhtml_legend=1 00:32:05.621 --rc geninfo_all_blocks=1 00:32:05.621 --rc geninfo_unexecuted_blocks=1 00:32:05.621 00:32:05.621 ' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.621 --rc genhtml_branch_coverage=1 00:32:05.621 --rc genhtml_function_coverage=1 00:32:05.621 --rc genhtml_legend=1 00:32:05.621 --rc geninfo_all_blocks=1 00:32:05.621 --rc geninfo_unexecuted_blocks=1 00:32:05.621 00:32:05.621 ' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:05.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.621 --rc genhtml_branch_coverage=1 00:32:05.621 --rc genhtml_function_coverage=1 00:32:05.621 --rc genhtml_legend=1 00:32:05.621 --rc geninfo_all_blocks=1 00:32:05.621 --rc geninfo_unexecuted_blocks=1 00:32:05.621 00:32:05.621 ' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:05.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
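In outline, the nvmftestinit step traced below builds a point-to-point TCP test topology out of the two detected ice ports (cvl_0_0 and cvl_0_1 on this machine): one port is moved into a private network namespace to act as the target side, the other stays in the root namespace as the initiator side. A condensed sketch of that sequence, using the namespace, interface and address values recorded in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic on 4420
  ping -c 1 10.0.0.2                                                 # sanity-check the path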
00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:05.621 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:05.622 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.622 01:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:08.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:08.155 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:08.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:08.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:32:08.155 00:32:08.155 --- 10.0.0.2 ping statistics --- 00:32:08.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.155 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:08.155 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:32:08.155 00:32:08.155 --- 10.0.0.1 ping statistics --- 00:32:08.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.156 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=1020892 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 1020892 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1020892 ']' 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:08.156 [2024-10-01 01:49:47.673046] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:08.156 [2024-10-01 01:49:47.673117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.156 [2024-10-01 01:49:47.744861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.156 [2024-10-01 01:49:47.837310] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
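At this point the NVMe-oF target has just been launched inside the target namespace and the harness is waiting for its RPC socket before configuring it. A minimal approximation of that launch-and-wait step (the real waitforlisten helper in autotest_common.sh is more elaborate; rpc_get_methods is assumed here only as a cheap readiness probe):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # block until the target answers on its default RPC socket
  until [ -S /var/tmp/spdk.sock ] && ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done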
00:32:08.156 [2024-10-01 01:49:47.837372] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.156 [2024-10-01 01:49:47.837397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.156 [2024-10-01 01:49:47.837419] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.156 [2024-10-01 01:49:47.837438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.156 [2024-10-01 01:49:47.837521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.156 [2024-10-01 01:49:47.837641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.156 [2024-10-01 01:49:47.837650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.156 01:49:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:08.414 [2024-10-01 01:49:48.214888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.414 01:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:08.672 Malloc0 00:32:08.929 01:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.187 01:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.445 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.703 [2024-10-01 01:49:49.346329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.703 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:09.961 [2024-10-01 01:49:49.615150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:09.961 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:10.219 [2024-10-01 01:49:49.896075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1021180 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1021180 /var/tmp/bdevperf.sock 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1021180 ']' 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:10.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.219 01:49:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.478 01:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.478 01:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:10.478 01:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:10.736 NVMe0n1 00:32:10.736 01:49:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:11.302 00:32:11.302 01:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1021312 00:32:11.302 01:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:11.302 01:49:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:12.237 01:49:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.495 [2024-10-01 01:49:52.292667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.495 [2024-10-01 01:49:52.292730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.495 [2024-10-01 01:49:52.292745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.495 [2024-10-01 01:49:52.292757] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the
state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 [2024-10-01 01:49:52.293201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214b40 is same with the state(6) to be set 00:32:12.496 01:49:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:15.779 01:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:16.037 00:32:16.037 01:49:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:16.294 01:49:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:19.573 01:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.573 [2024-10-01 01:49:59.397458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.573 01:49:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:20.946 01:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
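Condensed, the failover choreography driven so far is roughly the following (rpc_py and the bdevperf RPC socket are the ones defined earlier in the trace; the sleeps give the initiator time to react to each path change):

  # two paths to cnode1 are attached before I/O starts
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # listeners are then removed one at a time underneath the running workload
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fall back to 4420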
00:32:20.946 [2024-10-01 01:50:00.683813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.683908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.683932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.683951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.683971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.684016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.684037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 [2024-10-01 01:50:00.684057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2361cc0 is same with the state(6) to be set 00:32:20.946 01:50:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1021312 00:32:27.504 { 00:32:27.504 "results": [ 00:32:27.504 { 00:32:27.504 "job": "NVMe0n1", 00:32:27.504 "core_mask": "0x1", 00:32:27.504 "workload": "verify", 00:32:27.504 "status": "finished", 00:32:27.504 "verify_range": { 00:32:27.504 "start": 0, 00:32:27.504 "length": 16384 00:32:27.504 }, 00:32:27.504 "queue_depth": 128, 00:32:27.504 "io_size": 4096, 00:32:27.504 "runtime": 15.004812, 00:32:27.504 "iops": 8256.551298343491, 00:32:27.504 "mibps": 32.25215350915426, 00:32:27.504 "io_failed": 9284, 00:32:27.504 "io_timeout": 0, 00:32:27.504 "avg_latency_us": 14393.900243116395, 00:32:27.504 "min_latency_us": 433.8725925925926, 00:32:27.504 "max_latency_us": 27573.665185185186 00:32:27.504 } 00:32:27.504 ], 00:32:27.504 "core_count": 1 00:32:27.504 } 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1021180 ']' 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1021180' 00:32:27.504 killing process with pid 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1021180 00:32:27.504 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.504 [2024-10-01 01:49:49.963733] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:27.504 [2024-10-01 01:49:49.963822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021180 ] 00:32:27.504 [2024-10-01 01:49:50.026018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.504 [2024-10-01 01:49:50.120433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.504 Running I/O for 15 seconds... 00:32:27.504 8519.00 IOPS, 33.28 MiB/s [2024-10-01 01:49:52.294407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.504 [2024-10-01 01:49:52.294449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.504 [2024-10-01 01:49:52.294477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.504 [2024-10-01 01:49:52.294492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.504 [2024-10-01 01:49:52.294509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.504 [2024-10-01 01:49:52.294523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.504 [2024-10-01 01:49:52.294538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.504 [2024-10-01 01:49:52.294552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.504 [2024-10-01 01:49:52.294567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.504 [2024-10-01 01:49:52.294580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.294941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.294954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 
[2024-10-01 01:49:52.294968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.505 [2024-10-01 01:49:52.295564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.505 [2024-10-01 01:49:52.295578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: 00:32:27.505-00:32:27.508, 2024-10-01 01:49:52.295591-01:49:52.298235 -- the ABORTED - SQ DELETION (00/08) completion for the READ above, followed by alternating nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for the remaining in-flight commands on sqid:1 (READ lba 78384-78776 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE lba 78792-79088 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), every one completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:32:27.508 [2024-10-01 01:49:52.298265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:27.508 [2024-10-01 01:49:52.298282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:27.508 [2024-10-01 01:49:52.298294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0
00:32:27.508 [2024-10-01 01:49:52.298322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.508 [2024-10-01 01:49:52.298385] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22dc560 was disconnected and freed. reset controller.
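For context on the completions printed above: when the submission queue is destroyed during this qpair teardown, every in-flight command is failed back to the caller with NVMe generic status 0x00 / status code 0x08, the "ABORTED - SQ DELETION (00/08)" text repeated throughout the log. A minimal sketch of how a consumer of the SPDK NVMe driver can recognize that status in its I/O completion callback follows; the callback name, the io_ctx structure, and the retry flag are illustrative assumptions, while the spdk_nvme_cpl fields and the SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION constants come from the public SPDK headers.

#include "spdk/nvme.h"

/* Hypothetical per-I/O context; not part of the test code. */
struct io_ctx {
	bool needs_retry;
};

/* Completion callback matching spdk_nvme_cmd_cb. A command flushed because
 * its submission queue was deleted (controller reset / path failover) carries
 * generic status type 0x0 with status code 0x08, i.e. the
 * "ABORTED - SQ DELETION (00/08)" printed above. */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* I/O succeeded */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Not a media error: mark for resubmission after the reset. */
		ctx->needs_retry = true;
	}
}

In this test the bdev_nvme layer handles these aborts itself before freeing the qpair; the sketch only illustrates what the (00/08) status means to a driver-level caller.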
00:32:27.508 [2024-10-01 01:49:52.298404] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:27.508 [2024-10-01 01:49:52.298436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.508 [2024-10-01 01:49:52.298470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.508 [2024-10-01 01:49:52.298486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.508 [2024-10-01 01:49:52.298507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.508 [2024-10-01 01:49:52.298530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.508 [2024-10-01 01:49:52.298554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.508 [2024-10-01 01:49:52.298571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.508 [2024-10-01 01:49:52.298585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.508 [2024-10-01 01:49:52.298598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:27.508 [2024-10-01 01:49:52.301884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:27.508 [2024-10-01 01:49:52.301922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bbf90 (9): Bad file descriptor
00:32:27.508 [2024-10-01 01:49:52.423950] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
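Here the four pending ASYNC EVENT REQUEST admin commands are aborted with the same (00/08) status as the admin queue is torn down, the controller is marked failed, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and reconnects; the "(9): Bad file descriptor" flush error is errno 9 (EBADF) from the already-closed TCP socket. A rough sketch of the reset step, using the public driver API rather than the internal bdev_nvme path, is shown below; recover_after_sq_deletion() and the resubmission comments are illustrative assumptions, spdk_nvme_ctrlr_reset() is the real public call.

#include "spdk/nvme.h"

/* Illustrative only: reset a controller whose qpairs were just torn down,
 * mirroring the "resetting controller" / "Resetting controller successful"
 * lines above. bdev_nvme performs the equivalent internally, together with
 * the transport-ID failover. */
static int
recover_after_sq_deletion(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0) {
		/* Reset failed; a multipath consumer would switch to an
		 * alternate transport ID (the 4420 -> 4421 failover here). */
		return rc;
	}

	/* The caller then reconnects or re-allocates its I/O qpairs before
	 * resubmitting the commands that completed as ABORTED - SQ DELETION. */
	return 0;
}

For scale, the periodic throughput samples that follow are consistent with 4 KiB I/O (len:8 blocks at an assumed 512-byte block size): 7942.5 IOPS * 4 KiB ~ 31.03 MiB/s and 8090 IOPS * 4 KiB ~ 31.60 MiB/s, matching the figures printed by the test.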
00:32:27.508 7942.50 IOPS, 31.03 MiB/s 8090.00 IOPS, 31.60 MiB/s 8177.00 IOPS, 31.94 MiB/s
[log condensed: 00:32:27.508-00:32:27.511, 2024-10-01 01:49:56.108716-01:49:56.111870 -- a further qpair teardown emits alternating nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for the in-flight commands on sqid:1 (WRITE lba 103392-103568 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba 102568-103184 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:32:27.511 [2024-10-01 01:49:56.111884] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.111899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.111912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.111931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.111945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.111961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.511 [2024-10-01 01:49:56.112207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:27.511 [2024-10-01 01:49:56.112237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.511 [2024-10-01 01:49:56.112688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de390 is same with the state(6) to be set 00:32:27.511 [2024-10-01 01:49:56.112722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.511 [2024-10-01 01:49:56.112734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.511 [2024-10-01 01:49:56.112745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103384 len:8 PRP1 0x0 PRP2 0x0 00:32:27.511 [2024-10-01 01:49:56.112758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.511 [2024-10-01 01:49:56.112815] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22de390 was disconnected and freed. reset controller. 
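The completion status printed throughout this burst, "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0", is SPDK's rendering of the NVMe completion-queue-entry status field: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), with the phase, more, and do-not-retry bits clear, which is the expected outcome when a TCP qpair is torn down for failover rather than a media or transport fault. A minimal decoding sketch (plain Python, a hypothetical helper and not SPDK code; the bit layout follows the NVMe base specification, and only the two status-code strings seen in this log are filled in):

# decode_status.py - hypothetical helper, not part of the SPDK tree
GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}  # abbreviated table

def decode_cqe_status(status_word):
    """Decode the 16-bit phase+status word from NVMe CQE dword 3 (bits 31:16)."""
    sc = (status_word >> 1) & 0xFF         # Status Code
    sct = (status_word >> 9) & 0x7         # Status Code Type (0 = generic command status)
    return {
        "sct/sc": "(%02x/%02x)" % (sct, sc),
        "text": GENERIC_STATUS.get(sc, "?") if sct == 0 else "?",
        "p": status_word & 0x1,            # phase tag
        "m": (status_word >> 14) & 0x1,    # More
        "dnr": (status_word >> 15) & 0x1,  # Do Not Retry
    }

# The records above all carry SCT 0x0 / SC 0x08 with p/m/dnr clear:
print(decode_cqe_status(0x08 << 1))
# -> {'sct/sc': '(00/08)', 'text': 'ABORTED - SQ DELETION', 'p': 0, 'm': 0, 'dnr': 0}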
00:32:27.511 [2024-10-01 01:49:56.112833] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:27.511 [2024-10-01 01:49:56.112882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.511 [2024-10-01 01:49:56.112901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.511 [2024-10-01 01:49:56.112917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.512 [2024-10-01 01:49:56.112930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.512 [2024-10-01 01:49:56.112944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.512 [2024-10-01 01:49:56.112957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.512 [2024-10-01 01:49:56.112971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:27.512 [2024-10-01 01:49:56.112984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.512 [2024-10-01 01:49:56.113004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:27.512 [2024-10-01 01:49:56.116279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:27.512 [2024-10-01 01:49:56.116333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bbf90 (9): Bad file descriptor
00:32:27.512 8146.80 IOPS, 31.82 MiB/s [2024-10-01 01:49:56.190268] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
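The throughput samples interleaved with the reset (8146.80 IOPS / 31.82 MiB/s here, and the 8141.50 to 8196.56 IOPS readings that follow) are self-consistent with the 4 KiB I/O size implied by the len:8 records above, assuming the usual 512-byte logical blocks. A quick arithmetic check (plain Python, values copied from this log):

# iops_check.py - arithmetic sanity check only; sample values copied from the log
IO_SIZE_BYTES = 8 * 512   # len:8 logical blocks of 512 B each = 4 KiB per I/O (assumed block size)
samples = [(8146.80, 31.82), (8141.50, 31.80), (8173.14, 31.93), (8194.12, 32.01), (8196.56, 32.02)]
for iops, reported_mibps in samples:
    mibps = iops * IO_SIZE_BYTES / (1024 * 1024)
    print("%8.2f IOPS -> %5.2f MiB/s (log reports %5.2f)" % (iops, mibps, reported_mibps))
# Each computed value matches the logged MiB/s to two decimal places.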
00:32:27.512 8141.50 IOPS, 31.80 MiB/s 8173.14 IOPS, 31.93 MiB/s 8194.12 IOPS, 32.01 MiB/s 8196.56 IOPS, 32.02 MiB/s [2024-10-01 01:50:00.685778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:27.512 [2024-10-01 01:50:00.685835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.512 [... the same command / ABORTED - SQ DELETION (00/08) record pair repeats for the remaining in-flight I/O on qid:1: WRITEs lba:29728 through lba:30296 and READs lba:29656, lba:29664 ...]
00:32:27.514 [2024-10-01 01:50:00.688120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:27.514 [2024-10-01 01:50:00.688137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30304 len:8 PRP1 0x0 PRP2 0x0
00:32:27.514 [2024-10-01 01:50:00.688151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:27.514 [... the "aborting queued i/o" / "Command completed manually" / ABORTED - SQ DELETION (00/08) sequence repeats for the queued WRITEs lba:30312 through lba:30568 ...]
00:32:27.515 [2024-10-01 01:50:00.690068] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30576 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30584 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30592 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30600 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30608 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30616 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:27.515 [2024-10-01 01:50:00.690390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.515 [2024-10-01 01:50:00.690401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30624 len:8 PRP1 0x0 PRP2 0x0 00:32:27.515 [2024-10-01 01:50:00.690413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.515 [2024-10-01 01:50:00.690426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.515 [2024-10-01 01:50:00.690436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30632 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30640 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30648 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30656 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30664 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690695] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29672 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29680 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29688 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29696 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29704 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.690960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29712 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.690972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.690985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.690995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29720 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29648 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29728 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29736 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29744 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29752 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 
[2024-10-01 01:50:00.691341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29760 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29768 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29776 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29784 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.691525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.691536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29792 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.691549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.691561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.516 [2024-10-01 01:50:00.700449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.516 [2024-10-01 01:50:00.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29800 len:8 PRP1 0x0 PRP2 0x0 00:32:27.516 [2024-10-01 01:50:00.700496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.516 [2024-10-01 01:50:00.700513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29808 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29816 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29824 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29832 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29840 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29848 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:29856 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29864 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.700935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.700946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.700961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29872 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.700975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29880 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29888 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29896 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29904 len:8 PRP1 0x0 PRP2 0x0 
00:32:27.517 [2024-10-01 01:50:00.701201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29912 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29920 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29928 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29936 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29944 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29952 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29960 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29968 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.517 [2024-10-01 01:50:00.701645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29976 len:8 PRP1 0x0 PRP2 0x0 00:32:27.517 [2024-10-01 01:50:00.701657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.517 [2024-10-01 01:50:00.701669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.517 [2024-10-01 01:50:00.701680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29984 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29992 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30000 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30008 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30016 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.701938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30024 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.701950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.701963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.701988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30032 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30040 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30048 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:27.518 [2024-10-01 01:50:00.702128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30056 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30064 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30072 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30080 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30088 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30096 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702425] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30104 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30112 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30120 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30128 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.518 [2024-10-01 01:50:00.702640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.518 [2024-10-01 01:50:00.702651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30136 len:8 PRP1 0x0 PRP2 0x0 00:32:27.518 [2024-10-01 01:50:00.702663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.518 [2024-10-01 01:50:00.702677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30144 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.702723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29656 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.702778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29664 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.702825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30152 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.702888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30160 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.702939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.702951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.702961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30168 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.702974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30176 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 
01:50:00.703069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30184 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30192 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30200 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30208 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30216 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30224 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703377] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30232 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30240 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30248 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30256 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30264 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30272 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30280 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30288 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.519 [2024-10-01 01:50:00.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30296 len:8 PRP1 0x0 PRP2 0x0 00:32:27.519 [2024-10-01 01:50:00.703799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.519 [2024-10-01 01:50:00.703812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:27.519 [2024-10-01 01:50:00.703823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:27.520 [2024-10-01 01:50:00.703835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30304 len:8 PRP1 0x0 PRP2 0x0 00:32:27.520 [2024-10-01 01:50:00.703847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.520 [2024-10-01 01:50:00.703925] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22def00 was disconnected and freed. reset controller. 
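The run of entries above is the bdev_nvme layer draining its queued WRITE requests once the active path's submission queue is deleted: each queued command is completed manually and reported as ABORTED - SQ DELETION before the controller is reset. A minimal sketch, assuming the output above has been saved to a file (the failover test itself captures it as try.txt), for summarizing how much I/O was aborted:

log=${1:-try.txt}                       # assumed path to a saved copy of this output
grep -c 'ABORTED - SQ DELETION' "$log"  # count the aborted completions (one per log line in the saved file)
# lowest and highest LBA touched by the aborted WRITEs
grep -o 'lba:[0-9]*' "$log" | cut -d: -f2 | sort -n | sed -n '1p;$p'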
00:32:27.520 [2024-10-01 01:50:00.703943] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:27.520 [2024-10-01 01:50:00.704020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.520 [2024-10-01 01:50:00.704054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.520 [2024-10-01 01:50:00.704071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.520 [2024-10-01 01:50:00.704084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.520 [2024-10-01 01:50:00.704098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.520 [2024-10-01 01:50:00.704111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.520 [2024-10-01 01:50:00.704125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.520 [2024-10-01 01:50:00.704138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.520 [2024-10-01 01:50:00.704150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.520 [2024-10-01 01:50:00.704199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bbf90 (9): Bad file descriptor 00:32:27.520 [2024-10-01 01:50:00.707542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.520 [2024-10-01 01:50:00.751152] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
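Each path drop in this log ends the same way: a "Start failover from ... to ..." notice from bdev_nvme_failover_trid, the admin queue's ASYNC EVENT REQUESTs aborted with SQ DELETION, and finally "Resetting controller successful" once the new path is up. A small sketch, under the same saved-log assumption, for listing the path transitions a run went through:

log=${1:-try.txt}   # assumed path to a saved copy of this output
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$log" | sort | uniq -c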
00:32:27.520 8161.60 IOPS, 31.88 MiB/s 8187.45 IOPS, 31.98 MiB/s 8209.75 IOPS, 32.07 MiB/s 8229.15 IOPS, 32.15 MiB/s 8245.79 IOPS, 32.21 MiB/s 8257.20 IOPS, 32.25 MiB/s 00:32:27.520 Latency(us) 00:32:27.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:27.520 Verification LBA range: start 0x0 length 0x4000 00:32:27.520 NVMe0n1 : 15.00 8256.55 32.25 618.73 0.00 14393.90 433.87 27573.67 00:32:27.520 =================================================================================================================== 00:32:27.520 Total : 8256.55 32.25 618.73 0.00 14393.90 433.87 27573.67 00:32:27.520 Received shutdown signal, test time was about 15.000000 seconds 00:32:27.520 00:32:27.520 Latency(us) 00:32:27.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.520 =================================================================================================================== 00:32:27.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1023048 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1023048 /var/tmp/bdevperf.sock 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1023048 ']' 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
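The trace above is the gate between the two halves of the failover test: the first bdevperf run must have logged exactly three successful controller resets, and only then is a second bdevperf started in RPC-driven mode (-z) on /var/tmp/bdevperf.sock. A condensed sketch of those two steps; $testdir and $rootdir stand in for the long workspace paths and are assumptions:

count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
(( count == 3 )) || { echo "expected 3 successful resets, saw $count" >&2; exit 1; }
# Second stage: bdevperf idles on an RPC socket so the test can reconfigure it before running I/O.
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!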
00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:27.520 01:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:27.520 [2024-10-01 01:50:07.014933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:27.520 01:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:27.520 [2024-10-01 01:50:07.275594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:27.520 01:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.086 NVMe0n1 00:32:28.086 01:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.651 00:32:28.651 01:50:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.909 00:32:28.909 01:50:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:28.909 01:50:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:29.167 01:50:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:29.423 01:50:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:32.703 01:50:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:32.703 01:50:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:32.703 01:50:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1023756 00:32:32.703 01:50:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:32.703 01:50:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1023756 00:32:34.106 { 00:32:34.106 "results": [ 00:32:34.106 { 00:32:34.106 "job": "NVMe0n1", 00:32:34.106 "core_mask": "0x1", 00:32:34.106 "workload": "verify", 
00:32:34.106 "status": "finished", 00:32:34.106 "verify_range": { 00:32:34.106 "start": 0, 00:32:34.106 "length": 16384 00:32:34.106 }, 00:32:34.106 "queue_depth": 128, 00:32:34.106 "io_size": 4096, 00:32:34.106 "runtime": 1.005341, 00:32:34.106 "iops": 8544.36454894409, 00:32:34.106 "mibps": 33.37642401931285, 00:32:34.106 "io_failed": 0, 00:32:34.106 "io_timeout": 0, 00:32:34.106 "avg_latency_us": 14922.4896974087, 00:32:34.106 "min_latency_us": 2827.757037037037, 00:32:34.106 "max_latency_us": 18155.89925925926 00:32:34.106 } 00:32:34.106 ], 00:32:34.106 "core_count": 1 00:32:34.106 } 00:32:34.106 01:50:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:34.106 [2024-10-01 01:50:06.508191] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:34.106 [2024-10-01 01:50:06.508277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023048 ] 00:32:34.106 [2024-10-01 01:50:06.568549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.106 [2024-10-01 01:50:06.653252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.106 [2024-10-01 01:50:09.087637] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:34.106 [2024-10-01 01:50:09.087707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.107 [2024-10-01 01:50:09.087745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.107 [2024-10-01 01:50:09.087763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.107 [2024-10-01 01:50:09.087777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.107 [2024-10-01 01:50:09.087792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.107 [2024-10-01 01:50:09.087806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.107 [2024-10-01 01:50:09.087821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:34.107 [2024-10-01 01:50:09.087835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.107 [2024-10-01 01:50:09.087850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:34.107 [2024-10-01 01:50:09.087895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:34.107 [2024-10-01 01:50:09.087926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1828f90 (9): Bad file descriptor 00:32:34.107 [2024-10-01 01:50:09.108656] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:34.107 Running I/O for 1 seconds... 
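The second half of the test, whose try.txt dump appears above, rebuilds a three-path controller and then breaks the active path on purpose: listeners are added on ports 4421 and 4422, NVMe0 is attached over 4420, 4421 and 4422 through the bdevperf RPC socket, and detaching 4420 produces the failover to 4421 that the dump records. A sketch of that RPC sequence using the same rpc.py calls, with the workspace path shortened to scripts/rpc.py for readability:

rpc="scripts/rpc.py"                              # RPCs to the nvmf target (path abbreviated)
brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # RPCs to the bdevperf instance
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
$brpc bdev_nvme_get_controllers | grep -q NVMe0   # confirm the controller exists before breaking a path
$brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1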
00:32:34.107 8462.00 IOPS, 33.05 MiB/s 00:32:34.107 Latency(us) 00:32:34.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.107 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:34.107 Verification LBA range: start 0x0 length 0x4000 00:32:34.107 NVMe0n1 : 1.01 8544.36 33.38 0.00 0.00 14922.49 2827.76 18155.90 00:32:34.107 =================================================================================================================== 00:32:34.107 Total : 8544.36 33.38 0.00 0.00 14922.49 2827.76 18155.90 00:32:34.107 01:50:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:34.107 01:50:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:34.107 01:50:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:34.392 01:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:34.392 01:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:34.650 01:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:34.907 01:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1023048 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1023048 ']' 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1023048 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1023048 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1023048' 00:32:38.187 killing process with pid 1023048 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1023048 00:32:38.187 01:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1023048 00:32:38.445 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:38.445 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.703 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.704 rmmod nvme_tcp 00:32:38.704 rmmod nvme_fabrics 00:32:38.704 rmmod nvme_keyring 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 1020892 ']' 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 1020892 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1020892 ']' 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1020892 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1020892 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1020892' 00:32:38.704 killing process with pid 1020892 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1020892 00:32:38.704 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1020892 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:38.963 
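The teardown traced here removes everything the failover test set up: the subsystem is deleted over RPC, the captured bdevperf output is removed, the nvme-tcp and nvme-fabrics modules are unloaded, the long-running nvmf_tgt is killed, and the SPDK-tagged iptables rules are stripped. A condensed sketch; $rpc, $testdir and $nvmfpid are stand-ins for values the harness already holds:

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f "$testdir/try.txt"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                       # pid 1020892 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
ip -4 addr flush cvl_0_1                              # initiator-side interface for this run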
01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.963 01:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.497 00:32:41.497 real 0m35.632s 00:32:41.497 user 2m5.873s 00:32:41.497 sys 0m5.755s 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.497 ************************************ 00:32:41.497 END TEST nvmf_failover 00:32:41.497 ************************************ 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.497 ************************************ 00:32:41.497 START TEST nvmf_host_discovery 00:32:41.497 ************************************ 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:41.497 * Looking for test storage... 
00:32:41.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:32:41.497 01:50:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:41.497 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:41.497 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:41.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.498 --rc genhtml_branch_coverage=1 00:32:41.498 --rc genhtml_function_coverage=1 00:32:41.498 --rc genhtml_legend=1 00:32:41.498 --rc geninfo_all_blocks=1 00:32:41.498 --rc geninfo_unexecuted_blocks=1 00:32:41.498 00:32:41.498 ' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:41.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.498 --rc genhtml_branch_coverage=1 00:32:41.498 --rc genhtml_function_coverage=1 00:32:41.498 --rc genhtml_legend=1 00:32:41.498 --rc geninfo_all_blocks=1 00:32:41.498 --rc geninfo_unexecuted_blocks=1 00:32:41.498 00:32:41.498 ' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:41.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.498 --rc genhtml_branch_coverage=1 00:32:41.498 --rc genhtml_function_coverage=1 00:32:41.498 --rc genhtml_legend=1 00:32:41.498 --rc geninfo_all_blocks=1 00:32:41.498 --rc geninfo_unexecuted_blocks=1 00:32:41.498 00:32:41.498 ' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:41.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.498 --rc genhtml_branch_coverage=1 00:32:41.498 --rc genhtml_function_coverage=1 00:32:41.498 --rc genhtml_legend=1 00:32:41.498 --rc geninfo_all_blocks=1 00:32:41.498 --rc geninfo_unexecuted_blocks=1 00:32:41.498 00:32:41.498 ' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:41.498 01:50:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:41.498 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.499 01:50:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.400 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:43.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:43.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:43.401 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:43.401 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.401 01:50:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.401 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:32:43.660 00:32:43.660 --- 10.0.0.2 ping statistics --- 00:32:43.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.660 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:32:43.660 00:32:43.660 --- 10.0.0.1 ping statistics --- 00:32:43.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.660 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=1026447 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 1026447 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1026447 ']' 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:43.660 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.660 [2024-10-01 01:50:23.349671] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
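The namespace plumbing traced a little earlier is what lets the target run in isolation: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420, and the two pings above prove the path in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup, reusing the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # the harness also tags the rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                            # root namespace to target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back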
00:32:43.660 [2024-10-01 01:50:23.349777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.660 [2024-10-01 01:50:23.416480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.660 [2024-10-01 01:50:23.503425] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.660 [2024-10-01 01:50:23.503495] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.660 [2024-10-01 01:50:23.503524] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.660 [2024-10-01 01:50:23.503545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.660 [2024-10-01 01:50:23.503555] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.660 [2024-10-01 01:50:23.503582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.918 [2024-10-01 01:50:23.650741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.918 [2024-10-01 01:50:23.658984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.918 null0 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.918 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.918 null1 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1026472 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1026472 /tmp/host.sock 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1026472 ']' 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:43.919 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:43.919 01:50:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.919 [2024-10-01 01:50:23.739155] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:32:43.919 [2024-10-01 01:50:23.739239] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026472 ] 00:32:44.177 [2024-10-01 01:50:23.807962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.177 [2024-10-01 01:50:23.901814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.177 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:44.434 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.435 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 [2024-10-01 01:50:24.296690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:44.693 01:50:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:44.693 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:44.694 01:50:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:45.258 [2024-10-01 01:50:25.090155] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:45.258 [2024-10-01 01:50:25.090185] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:45.258 [2024-10-01 01:50:25.090209] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:45.515 [2024-10-01 01:50:25.177511] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:45.515 [2024-10-01 01:50:25.240227] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:45.515 [2024-10-01 01:50:25.240250] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
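At this point the host-side discovery service (started earlier with 'bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test') has attached controller nvme0 for the newly announced subsystem nqn.2016-06.io.spdk:cnode0, and the checks above have just confirmed that both the controller (nvme0) and its namespace bdev (nvme0n1) are visible over the host RPC socket. A rough reconstruction of the two query helpers the host/discovery.sh xtrace shows, assuming the standard scripts/rpc.py client; the jq filters are the ones printed in the log:

  # Host-side query helpers as reconstructed from the xtrace markers above.
  RPC=scripts/rpc.py        # assumed to be run from the SPDK checkout
  HOST_SOCK=/tmp/host.sock  # RPC socket of the host-role nvmf_tgt started earlier

  get_subsystem_names() {
      # controllers the host has attached, e.g. "nvme0"
      "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # namespace bdevs visible on the host, e.g. "nvme0n1 nvme0n2"
      "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }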
00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.773 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:46.031 01:50:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.031 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.288 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.288 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:46.288 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:46.288 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:46.288 01:50:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.219 [2024-10-01 01:50:26.968679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:47.219 [2024-10-01 01:50:26.969508] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:47.219 [2024-10-01 01:50:26.969557] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.219 01:50:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:47.219 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.476 01:50:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:47.476 01:50:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:47.476 [2024-10-01 01:50:27.095456] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:47.733 [2024-10-01 01:50:27.358057] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.733 [2024-10-01 01:50:27.358094] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:47.733 [2024-10-01 01:50:27.358103] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.298 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.557 [2024-10-01 01:50:28.189328] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:48.557 [2024-10-01 01:50:28.189376] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:48.557 [2024-10-01 01:50:28.194252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.557 [2024-10-01 01:50:28.194286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.557 [2024-10-01 01:50:28.194302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.557 [2024-10-01 01:50:28.194315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.557 [2024-10-01 01:50:28.194346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.557 [2024-10-01 01:50:28.194361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.557 [2024-10-01 01:50:28.194377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:48.557 [2024-10-01 
01:50:28.194392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:48.557 [2024-10-01 01:50:28.194406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:48.557 [2024-10-01 01:50:28.204249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.557 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.558 [2024-10-01 01:50:28.214311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.214551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.214584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.214604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.214630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.214666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.214687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.214704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.214728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
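The errors above follow from the 'nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420' call: the host's established path to port 4420 now fails to reconnect (connect() returns errno 111, connection refused), bdev_nvme's controller resets fail, and shortly afterwards the discovery poller drops the 4420 path while keeping 4421. The test verifies this by listing the remaining service IDs for nvme0; a sketch of that helper as reconstructed from the xtrace, with the rpc.py location assumed:

  RPC=scripts/rpc.py   # assumed to be run from the SPDK checkout

  get_subsystem_paths() {
      # service IDs (ports) of every path currently attached for the named controller,
      # e.g. "4420 4421" before the listener removal and "4421" after it
      local name=$1
      "$RPC" -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }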
00:32:48.558 [2024-10-01 01:50:28.224397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.224589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.224616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.224633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.224655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.224675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.224688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.224701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.224733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.558 [2024-10-01 01:50:28.234478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.234707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.234740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.234759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.234787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.234840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.234863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.234887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.234909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.558 [2024-10-01 01:50:28.244563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.244744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.244776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.244795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.244820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.244843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.244859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.244873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.244911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.558 [2024-10-01 01:50:28.254644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.254831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.254864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.254882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.254907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.254943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.254963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.254979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.255012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.558 [2024-10-01 01:50:28.264727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.264913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.264942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.264964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.264988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.265033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.265052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.265066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.265099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
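The repeated checks running through this log ('waitforcondition' and the notification-count comparisons) are bounded polling loops from common/autotest_common.sh and host/discovery.sh. A rough reconstruction from the xtrace, assuming bash and the rpc.py client sketched earlier; variable handling is simplified:

  RPC=scripts/rpc.py        # assumed location inside the SPDK checkout
  HOST_SOCK=/tmp/host.sock
  notify_id=0               # running high-water mark of already-counted notifications

  # Retry an arbitrary shell condition up to 10 times, one second apart.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # notification_count = notify events newer than notify_id; then advance notify_id past them,
  # matching the notification_count/notify_id values printed in the xtrace.
  get_notification_count() {
      notification_count=$("$RPC" -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # e.g. 'is_notification_count_eq 1' after a namespace is added to the subsystem
  is_notification_count_eq() {
      local expected_count=$1
      waitforcondition 'get_notification_count && ((notification_count == expected_count))'
  }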
00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.558 [2024-10-01 01:50:28.274803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:48.558 [2024-10-01 01:50:28.275013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.558 [2024-10-01 01:50:28.275043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x887850 with addr=10.0.0.2, port=4420 00:32:48.558 [2024-10-01 01:50:28.275060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887850 is same with the state(6) to be set 00:32:48.558 [2024-10-01 01:50:28.275083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x887850 (9): Bad file descriptor 00:32:48.558 [2024-10-01 01:50:28.275115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:48.558 [2024-10-01 01:50:28.275133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:48.558 [2024-10-01 01:50:28.275146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:48.558 [2024-10-01 01:50:28.275166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.558 [2024-10-01 01:50:28.277128] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:48.558 [2024-10-01 01:50:28.277159] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.558 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.559 01:50:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:48.559 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.818 01:50:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.754 [2024-10-01 01:50:29.541735] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:49.754 [2024-10-01 01:50:29.541763] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:49.754 [2024-10-01 01:50:29.541788] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:50.012 [2024-10-01 01:50:29.669217] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:50.012 [2024-10-01 01:50:29.736140] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:50.013 [2024-10-01 01:50:29.736174] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.013 request: 00:32:50.013 { 00:32:50.013 "name": "nvme", 00:32:50.013 "trtype": "tcp", 00:32:50.013 "traddr": "10.0.0.2", 00:32:50.013 "adrfam": "ipv4", 00:32:50.013 "trsvcid": "8009", 00:32:50.013 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:50.013 "wait_for_attach": true, 00:32:50.013 "method": "bdev_nvme_start_discovery", 00:32:50.013 "req_id": 1 00:32:50.013 } 00:32:50.013 Got JSON-RPC error response 00:32:50.013 response: 00:32:50.013 { 00:32:50.013 "code": -17, 00:32:50.013 "message": "File exists" 00:32:50.013 } 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.013 request: 00:32:50.013 { 00:32:50.013 "name": "nvme_second", 00:32:50.013 "trtype": "tcp", 00:32:50.013 "traddr": "10.0.0.2", 00:32:50.013 "adrfam": "ipv4", 00:32:50.013 "trsvcid": "8009", 00:32:50.013 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:50.013 "wait_for_attach": true, 00:32:50.013 "method": "bdev_nvme_start_discovery", 00:32:50.013 "req_id": 1 00:32:50.013 } 00:32:50.013 Got JSON-RPC error response 00:32:50.013 response: 00:32:50.013 { 00:32:50.013 "code": -17, 00:32:50.013 "message": "File exists" 00:32:50.013 } 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:50.013 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:50.272 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.273 01:50:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.207 [2024-10-01 01:50:30.951609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:51.207 [2024-10-01 01:50:30.951679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c78a0 with addr=10.0.0.2, port=8010 00:32:51.207 [2024-10-01 01:50:30.951711] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:51.207 [2024-10-01 01:50:30.951736] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:51.207 [2024-10-01 01:50:30.951750] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:52.140 [2024-10-01 01:50:31.953994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.140 [2024-10-01 01:50:31.954065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c78a0 with addr=10.0.0.2, port=8010 00:32:52.140 [2024-10-01 01:50:31.954087] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:52.140 [2024-10-01 01:50:31.954100] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:52.140 [2024-10-01 01:50:31.954112] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:53.515 [2024-10-01 01:50:32.956255] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:53.515 request: 00:32:53.515 { 00:32:53.515 "name": "nvme_second", 00:32:53.515 "trtype": "tcp", 00:32:53.515 "traddr": "10.0.0.2", 00:32:53.515 "adrfam": "ipv4", 00:32:53.515 "trsvcid": "8010", 00:32:53.515 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:53.515 "wait_for_attach": false, 00:32:53.515 "attach_timeout_ms": 3000, 00:32:53.515 "method": "bdev_nvme_start_discovery", 00:32:53.515 "req_id": 1 00:32:53.515 } 00:32:53.515 Got JSON-RPC error response 00:32:53.515 response: 00:32:53.515 { 00:32:53.515 "code": -110, 00:32:53.515 "message": "Connection timed out" 00:32:53.515 } 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:53.515 01:50:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1026472 00:32:53.515 01:50:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:53.515 rmmod nvme_tcp 00:32:53.515 rmmod nvme_fabrics 00:32:53.515 rmmod nvme_keyring 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 1026447 ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1026447 ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1026447' 00:32:53.515 killing process with pid 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1026447 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:53.515 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:32:53.775 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.775 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:53.775 01:50:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.775 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.775 01:50:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:55.681 00:32:55.681 real 0m14.550s 00:32:55.681 user 0m21.462s 00:32:55.681 sys 0m3.001s 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:55.681 ************************************ 00:32:55.681 END TEST nvmf_host_discovery 00:32:55.681 ************************************ 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.681 ************************************ 00:32:55.681 START TEST nvmf_host_multipath_status 00:32:55.681 ************************************ 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:55.681 * Looking for test storage... 
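The rejected bdev_nvme_start_discovery calls earlier in this suite show the two failure modes the test asserts on: a second discovery aimed at the already-monitored 10.0.0.2:8009 target is refused with -17 ("File exists"), even under a different name, and a discovery pointed at port 8010, where nothing is listening, gives up after its attach_timeout_ms budget with -110 ("Connection timed out"). A hedged reproduction of those RPCs against the same host socket might be:

  #!/usr/bin/env bash
  # Sketch only: mirrors the discovery RPCs traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/tmp/host.sock
  # First start attaches the discovered subsystem; -w waits for the attach to finish.
  "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # Another start against the same 8009 target is rejected (-17), even with a new name.
  "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "duplicate discovery rejected"
  # With no listener on 8010, the attach times out after the 3000 ms budget (-110).
  "$rpc" -s "$sock" bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "discovery attach timed out"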
00:32:55.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:32:55.681 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.941 --rc genhtml_branch_coverage=1 00:32:55.941 --rc genhtml_function_coverage=1 00:32:55.941 --rc genhtml_legend=1 00:32:55.941 --rc geninfo_all_blocks=1 00:32:55.941 --rc geninfo_unexecuted_blocks=1 00:32:55.941 00:32:55.941 ' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.941 --rc genhtml_branch_coverage=1 00:32:55.941 --rc genhtml_function_coverage=1 00:32:55.941 --rc genhtml_legend=1 00:32:55.941 --rc geninfo_all_blocks=1 00:32:55.941 --rc geninfo_unexecuted_blocks=1 00:32:55.941 00:32:55.941 ' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.941 --rc genhtml_branch_coverage=1 00:32:55.941 --rc genhtml_function_coverage=1 00:32:55.941 --rc genhtml_legend=1 00:32:55.941 --rc geninfo_all_blocks=1 00:32:55.941 --rc geninfo_unexecuted_blocks=1 00:32:55.941 00:32:55.941 ' 00:32:55.941 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:55.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.941 --rc genhtml_branch_coverage=1 00:32:55.941 --rc genhtml_function_coverage=1 00:32:55.942 --rc genhtml_legend=1 00:32:55.942 --rc geninfo_all_blocks=1 00:32:55.942 --rc geninfo_unexecuted_blocks=1 00:32:55.942 00:32:55.942 ' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
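The xtrace above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares the fields numerically, and because 1 < 2 the job keeps the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options. A minimal sketch of that comparison (the function name below is illustrative; the real helpers live in scripts/common.sh):

  #!/usr/bin/env bash
  # Illustrative sketch of the traced check, not the in-tree implementation.
  version_lt() {
      local -a a b
      local i x y
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x > y)) && return 1   # left version is newer
          ((x < y)) && return 0   # left version is older
      done
      return 1                    # equal versions are not "less than"
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi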
00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:55.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:55.942 01:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.846 01:50:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:57.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.846 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:57.846 01:50:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:57.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:57.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:57.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:32:57.847 00:32:57.847 --- 10.0.0.2 ping statistics --- 00:32:57.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.847 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:57.847 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:58.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:32:58.106 00:32:58.106 --- 10.0.0.1 ping statistics --- 00:32:58.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.106 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=1029758 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 1029758 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1029758 ']' 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
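The nvmf/common.sh trace above (nvmf_tcp_init) builds the two-port TCP test bed before the target application is launched: the target-side interface is moved into its own network namespace with 10.0.0.2, the initiator side keeps 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction confirms connectivity. Condensed into a runnable sketch; the cvl_0_0/cvl_0_1 names are the NIC ports this particular host detected, so treat them as placeholders elsewhere:

  # Condensed sketch of the nvmf_tcp_init steps traced above; interface names are host-specific.
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                              # target port gets its own namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"      # target address
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                            # target -> initiator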
00:32:58.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:58.106 01:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:58.106 [2024-10-01 01:50:37.776512] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:32:58.106 [2024-10-01 01:50:37.776603] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.106 [2024-10-01 01:50:37.842350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:58.106 [2024-10-01 01:50:37.930080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.106 [2024-10-01 01:50:37.930147] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.106 [2024-10-01 01:50:37.930172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.106 [2024-10-01 01:50:37.930186] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.106 [2024-10-01 01:50:37.930199] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.106 [2024-10-01 01:50:37.930283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.106 [2024-10-01 01:50:37.930291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1029758 00:32:58.364 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:58.621 [2024-10-01 01:50:38.376167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.621 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:58.880 Malloc0 00:32:58.880 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:59.138 01:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:59.701 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.959 [2024-10-01 01:50:39.601634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.959 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:00.217 [2024-10-01 01:50:39.886462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:00.217 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1030042 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1030042 /var/tmp/bdevperf.sock 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1030042 ']' 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:00.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
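Everything the multipath_status.sh helpers have done up to this point reduces to a short RPC sequence: start nvmf_tgt inside the namespace, create the TCP transport, back an ANA-reporting subsystem with a malloc bdev, expose it on two listeners (4420 and 4421), and start bdevperf in wait-for-RPC mode so the two attach_controller calls that follow in the trace can create both paths to the same namespace. A condensed sketch run from the SPDK repository root, with the absolute Jenkins paths shortened and all flags copied from the trace:

  RPC=./scripts/rpc.py
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                      # RAM-backed bdev for namespace 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: bdevperf waits for RPC configuration (-z); both listener ports are then
  # attached as paths of the same Nvme0 controller, the second one with -x multipath.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10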
00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.218 01:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:00.476 01:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.476 01:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:00.476 01:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:00.734 01:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:01.299 Nvme0n1 00:33:01.300 01:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:01.866 Nvme0n1 00:33:01.866 01:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:01.866 01:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:03.763 01:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:03.763 01:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:04.021 01:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:04.279 01:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.652 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.910 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.910 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.910 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.910 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:06.167 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.167 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:06.167 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.167 01:50:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:06.426 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.426 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:06.426 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.426 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.684 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.684 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:06.684 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.684 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.942 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.942 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:06.942 01:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:07.199 01:50:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:07.456 01:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:08.448 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:08.448 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:08.448 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.448 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.705 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.705 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:08.705 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.705 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.963 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.963 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.963 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.963 01:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.238 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.238 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.238 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.238 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:09.494 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.494 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:09.494 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.494 01:50:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:10.058 01:50:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:10.622 01:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:10.622 01:50:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.995 01:50:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.253 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.253 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.253 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.253 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.512 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.512 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.512 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.512 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.770 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.770 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:12.770 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.770 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.028 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.028 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.028 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.029 01:50:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.287 01:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.287 01:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:13.287 01:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:13.854 01:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:13.854 01:50:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:15.230 01:50:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.230 01:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.488 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.488 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.488 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.488 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.747 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.747 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.747 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.747 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.005 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.005 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.005 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.005 01:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.263 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.263 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:16.263 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.263 01:50:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.521 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.521 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:16.521 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:16.779 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:17.038 01:50:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:18.410 01:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:18.410 01:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:18.410 01:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.410 01:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.410 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.410 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:18.410 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.411 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.668 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.668 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.668 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.668 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.927 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.927 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.927 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.927 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.185 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.185 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:19.185 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.185 01:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.442 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.442 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:19.442 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.442 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.699 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.699 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:19.699 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:19.957 01:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:20.215 01:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.588 01:51:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.588 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.846 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.846 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.846 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.846 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.104 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.104 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.104 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.104 01:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.671 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.671 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:22.671 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.671 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.929 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.929 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.929 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.929 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:23.187 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.187 01:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:23.445 01:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:23.445 01:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:23.704 01:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:23.962 01:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:24.896 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:24.896 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:24.896 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.896 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:25.154 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.154 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:25.154 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.154 01:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:25.412 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.412 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:25.412 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.412 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:25.670 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.670 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:25.670 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.670 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.237 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.237 01:51:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.237 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.237 01:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.495 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.495 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:26.495 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.495 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:26.754 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.754 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:26.754 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.011 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:27.268 01:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:28.201 01:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:28.201 01:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:28.201 01:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.201 01:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:28.460 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.460 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:28.460 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.460 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:28.718 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.718 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:28.718 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.718 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.283 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.283 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.283 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.283 01:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:29.542 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.542 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:29.542 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.542 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:29.799 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.799 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:29.799 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.799 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.057 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.057 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:30.057 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:30.314 01:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:30.571 01:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
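The repetitive blocks above are the script's check_status loop: after every set_ANA_state call it asks bdevperf for its view of the two paths (bdev_nvme_get_io_paths) and compares the current/connected/accessible flags of each port against the expected values. A compact reconstruction of those helpers, using only the RPCs and jq filters that appear in the trace:

  BPF_SOCK=/var/tmp/bdevperf.sock
  port_status() {   # port_status <trsvcid> <field> <expected true|false>
      local got
      got=$(./scripts/rpc.py -s "$BPF_SOCK" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }
  set_ANA_state() {  # set_ANA_state <state for listener 4420> <state for listener 4421>
      ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  # With the active_active policy set above and both listeners optimized, both ports are
  # expected to be "current" at once, which is what the surrounding checks verify:
  ./scripts/rpc.py -s "$BPF_SOCK" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized
  sleep 1
  port_status 4420 current true && port_status 4421 current true

Earlier in the trace, before the policy change, the same optimized/optimized ANA states left only port 4420 marked current, which is the default single-active-path behaviour the first check_status calls confirm.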
00:33:31.503 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:31.503 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:31.503 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.503 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:31.761 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.761 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:31.761 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.761 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.018 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.018 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.018 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.018 01:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:32.276 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.276 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:32.276 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.276 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:32.534 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.534 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:32.534 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.534 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:33.100 01:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:33.358 01:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:33.924 01:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:34.858 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:34.858 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:34.859 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.859 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.117 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.117 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:35.117 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.117 01:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:35.376 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.376 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:35.376 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.376 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:35.634 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:35.634 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:35.634 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.634 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:35.893 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.893 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:35.893 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.893 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.151 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.151 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:36.151 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.151 01:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1030042 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1030042 ']' 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1030042 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030042 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030042' 00:33:36.410 killing process with pid 1030042 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1030042 00:33:36.410 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1030042 00:33:36.410 { 00:33:36.410 "results": [ 00:33:36.410 { 00:33:36.410 "job": "Nvme0n1", 
00:33:36.410 "core_mask": "0x4", 00:33:36.410 "workload": "verify", 00:33:36.410 "status": "terminated", 00:33:36.410 "verify_range": { 00:33:36.410 "start": 0, 00:33:36.410 "length": 16384 00:33:36.410 }, 00:33:36.410 "queue_depth": 128, 00:33:36.410 "io_size": 4096, 00:33:36.410 "runtime": 34.485358, 00:33:36.410 "iops": 7898.192618444036, 00:33:36.410 "mibps": 30.852314915797017, 00:33:36.410 "io_failed": 0, 00:33:36.410 "io_timeout": 0, 00:33:36.410 "avg_latency_us": 16179.813282379057, 00:33:36.410 "min_latency_us": 391.3955555555556, 00:33:36.410 "max_latency_us": 4026531.84 00:33:36.410 } 00:33:36.410 ], 00:33:36.410 "core_count": 1 00:33:36.410 } 00:33:36.685 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1030042 00:33:36.685 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:36.685 [2024-10-01 01:50:39.955851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:33:36.685 [2024-10-01 01:50:39.955953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030042 ] 00:33:36.685 [2024-10-01 01:50:40.020379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.685 [2024-10-01 01:50:40.113852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.685 [2024-10-01 01:50:41.437604] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:33:36.685 Running I/O for 90 seconds... 
00:33:36.685 8691.00 IOPS, 33.95 MiB/s 8704.50 IOPS, 34.00 MiB/s 8682.00 IOPS, 33.91 MiB/s 8672.50 IOPS, 33.88 MiB/s 8655.00 IOPS, 33.81 MiB/s 8591.33 IOPS, 33.56 MiB/s 8487.29 IOPS, 33.15 MiB/s 8405.25 IOPS, 32.83 MiB/s 8337.11 IOPS, 32.57 MiB/s 8388.90 IOPS, 32.77 MiB/s 8428.18 IOPS, 32.92 MiB/s 8445.75 IOPS, 32.99 MiB/s 8462.00 IOPS, 33.05 MiB/s 8481.57 IOPS, 33.13 MiB/s [2024-10-01 01:50:56.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.570873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.570954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.570977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.571214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.571230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.572301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.572365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.572422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.572462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.685 [2024-10-01 01:50:56.572501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.685 [2024-10-01 01:50:56.572751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:36.685 [2024-10-01 01:50:56.572773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.572789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.572812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.572828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.572856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.572873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.572895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.572911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.572933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.572949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.572972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:36.686 [2024-10-01 01:50:56.573241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.573941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.573965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.686 [2024-10-01 01:50:56.574369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:36.686 [2024-10-01 01:50:56.574393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 
sqhd:0049 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.574943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.574970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 
01:50:56.575465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.575960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.575992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.576032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.576050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.576077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.576094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.576120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.576137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:36.687 [2024-10-01 01:50:56.576164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.687 [2024-10-01 01:50:56.576181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 
01:50:56.576793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.576852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.688 [2024-10-01 01:50:56.576894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.688 [2024-10-01 01:50:56.576937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.576962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.688 [2024-10-01 01:50:56.576993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 
cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:50:56.577696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.688 [2024-10-01 01:50:56.577713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:36.688 8433.07 IOPS, 32.94 MiB/s 7906.00 IOPS, 30.88 MiB/s 7440.94 IOPS, 29.07 MiB/s 7027.56 IOPS, 27.45 MiB/s 6692.63 IOPS, 26.14 MiB/s 6745.75 IOPS, 26.35 MiB/s 6788.71 IOPS, 26.52 MiB/s 6874.45 IOPS, 26.85 MiB/s 7064.78 IOPS, 27.60 MiB/s 7242.58 IOPS, 28.29 MiB/s 7406.48 IOPS, 28.93 MiB/s 7425.04 IOPS, 29.00 MiB/s 7444.30 IOPS, 29.08 MiB/s 7457.11 IOPS, 29.13 MiB/s 7513.59 IOPS, 29.35 MiB/s 7639.07 IOPS, 29.84 MiB/s 7757.58 IOPS, 30.30 MiB/s [2024-10-01 01:51:13.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.688 [2024-10-01 01:51:13.459160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:36.688 [2024-10-01 01:51:13.459215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.688 [2024-10-01 01:51:13.459235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.689 [2024-10-01 01:51:13.459277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.689 [2024-10-01 01:51:13.459318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.689 [2024-10-01 01:51:13.459357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.689 [2024-10-01 01:51:13.459396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.689 [2024-10-01 01:51:13.459436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:36.689 [2024-10-01 01:51:13.459458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25720 
00:33:36.688 [2024-10-01 01:51:13.459065 through 01:51:13.473984] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: sustained burst of command/completion notice pairs on qid:1; READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, sqid:1 nsid:1 len:8, lba range 25216 through 26688, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:33:36.695 [2024-10-01 01:51:13.474014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.474033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.475603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.475641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.475687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.475726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.475879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.475962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.475985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.695 [2024-10-01 01:51:13.476756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.476778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.476795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.477953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.478006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:36.695 [2024-10-01 01:51:13.478039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.695 [2024-10-01 01:51:13.478058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.478099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.478137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.478177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:33 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.478217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.478256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.478294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.478340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.478394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.478417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.478434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:33:36.696 [2024-10-01 01:51:13.479635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.479928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.479951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.479968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.480562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.480588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.480616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.696 [2024-10-01 01:51:13.480634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.480657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.480676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.480698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.480714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:36.696 [2024-10-01 01:51:13.480737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.696 [2024-10-01 01:51:13.480754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.480792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.480833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.480871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.480910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.480951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.480972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.480994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.481085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.481124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:36.697 [2024-10-01 01:51:13.481893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.481931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.481993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.482047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.482085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.482123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.482161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.482199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.482238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.482276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.482298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.482315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484403] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.697 [2024-10-01 01:51:13.484419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:36.697 [2024-10-01 01:51:13.484478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.697 [2024-10-01 01:51:13.484493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.484684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.484721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.484817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.484834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.487831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.487874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.487949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.487971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.488408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.488486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.488979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.698 [2024-10-01 01:51:13.489011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.489041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.489082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.489120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:36.698 [2024-10-01 01:51:13.489137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.698 [2024-10-01 01:51:13.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.489790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.489818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.489835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.491054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.491100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.491257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.491295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.699 [2024-10-01 01:51:13.491334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 
m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:36.699 [2024-10-01 01:51:13.491633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.699 [2024-10-01 01:51:13.491650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.491688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.491742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.491781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.491817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.491855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.491891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.491929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.491950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.491970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.492037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.492306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.492360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.492398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.492419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.492435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.494963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.495127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.495166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.495205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.495243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.700 [2024-10-01 01:51:13.495660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.700 [2024-10-01 01:51:13.495734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:36.700 [2024-10-01 01:51:13.495755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.495810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.495848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.495886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.495925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.495966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.495983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.496278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.496316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.496339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.496355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.497524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.497587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.497625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.497952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.497973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.498014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.498056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.701 [2024-10-01 01:51:13.498791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.498829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.498866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:36.701 [2024-10-01 01:51:13.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.701 [2024-10-01 01:51:13.498903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.498924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.498940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.498961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.498992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.499528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.499551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.499568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.500682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.500720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.500756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.500793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.500830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.500962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.500984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.501008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.501035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.501064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.501087] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.501104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.501127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.501149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.501172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.702 [2024-10-01 01:51:13.501189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.501211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.501228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.502339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.502389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.502417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.502435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.502473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.502491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.502513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.702 [2024-10-01 01:51:13.502535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:36.702 [2024-10-01 01:51:13.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.703 [2024-10-01 01:51:13.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:36.703 [2024-10-01 01:51:13.502597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.703 [2024-10-01 01:51:13.502613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0
00:33:36.703 [... several hundred further *NOTICE* records of the same form condensed here: nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE commands (sqid:1 nsid:1 len:8, varying cid and lba) each followed by nvme_qpair.c: 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, logged between 2024-10-01 01:51:13.502 and 01:51:13.517 ...]
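If this run needs to be triaged later, the failed-over I/O condensed above and the performance summary that follows can be checked directly from a saved copy of this console output. The snippet below is only a sketch: build.log is a hypothetical file name for such a copy, and it uses nothing beyond grep, wc and awk.

    # Count how many completions carried the ASYMMETRIC ACCESS INACCESSIBLE
    # (03/02) status in a saved copy of this log (build.log is hypothetical).
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l

    # Cross-check the summary that follows: with 4096-byte I/O,
    # MiB/s = IOPS * 4096 / 1048576, so 7898.19 IOPS is about 30.85 MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7898.19 * 4096 / 1048576 }'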
0x0 00:33:36.707 [2024-10-01 01:51:13.517430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:36.707 7841.72 IOPS, 30.63 MiB/s 7866.36 IOPS, 30.73 MiB/s 7889.91 IOPS, 30.82 MiB/s Received shutdown signal, test time was about 34.486142 seconds 00:33:36.707 00:33:36.707 Latency(us) 00:33:36.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.707 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:36.707 Verification LBA range: start 0x0 length 0x4000 00:33:36.707 Nvme0n1 : 34.49 7898.19 30.85 0.00 0.00 16179.81 391.40 4026531.84 00:33:36.707 =================================================================================================================== 00:33:36.707 Total : 7898.19 30.85 0.00 0.00 16179.81 391.40 4026531.84 00:33:36.707 [2024-10-01 01:51:16.189666] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:33:36.707 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.970 rmmod nvme_tcp 00:33:36.970 rmmod nvme_fabrics 00:33:36.970 rmmod nvme_keyring 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 1029758 ']' 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 1029758 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1029758 ']' 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1029758 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029758 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029758' 00:33:36.970 killing process with pid 1029758 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1029758 00:33:36.970 01:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1029758 00:33:37.277 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:37.277 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.278 01:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.841 00:33:39.841 real 0m43.640s 00:33:39.841 user 2m7.921s 00:33:39.841 sys 0m13.140s 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:39.841 ************************************ 00:33:39.841 END TEST nvmf_host_multipath_status 00:33:39.841 ************************************ 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.841 ************************************ 00:33:39.841 START TEST nvmf_discovery_remove_ifc 00:33:39.841 ************************************ 00:33:39.841 01:51:19 
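As a reading aid for the structure of this log: each test is bracketed by the START TEST / END TEST banners and a time measurement like the real/user/sys block above. The sketch below only illustrates that bracketing pattern; it is not SPDK's actual run_test implementation, and run_test_sketch is an invented name.

    # Illustrative only: wrap a command with START/END banners and timing so a
    # long console log can be sliced per test, as in the output above.
    run_test_sketch() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Example: run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp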
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:39.841 * Looking for test storage... 00:33:39.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:39.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.841 --rc genhtml_branch_coverage=1 00:33:39.841 --rc genhtml_function_coverage=1 00:33:39.841 --rc genhtml_legend=1 00:33:39.841 --rc geninfo_all_blocks=1 00:33:39.841 --rc geninfo_unexecuted_blocks=1 00:33:39.841 00:33:39.841 ' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:39.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.841 --rc genhtml_branch_coverage=1 00:33:39.841 --rc genhtml_function_coverage=1 00:33:39.841 --rc genhtml_legend=1 00:33:39.841 --rc geninfo_all_blocks=1 00:33:39.841 --rc geninfo_unexecuted_blocks=1 00:33:39.841 00:33:39.841 ' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:39.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.841 --rc genhtml_branch_coverage=1 00:33:39.841 --rc genhtml_function_coverage=1 00:33:39.841 --rc genhtml_legend=1 00:33:39.841 --rc geninfo_all_blocks=1 00:33:39.841 --rc geninfo_unexecuted_blocks=1 00:33:39.841 00:33:39.841 ' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:39.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.841 --rc genhtml_branch_coverage=1 00:33:39.841 --rc genhtml_function_coverage=1 00:33:39.841 --rc genhtml_legend=1 00:33:39.841 --rc geninfo_all_blocks=1 00:33:39.841 --rc geninfo_unexecuted_blocks=1 00:33:39.841 00:33:39.841 ' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.841 
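The trace above walks through the shell version check that decides whether the installed lcov (1.15) is older than 2: both version strings are split on '.', '-' and ':' and compared numerically field by field. The function below is a simplified, self-contained sketch of that idea, not the code from scripts/common.sh, and ver_lt is an invented name.

    # Simplified illustration of the version check traced above: split the two
    # version strings on '.', '-' and ':' and compare them numerically field
    # by field; succeed when the first version is strictly older.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} && i < ${#b[@]}; i++)); do
            (( 10#${a[i]} < 10#${b[i]} )) && return 0
            (( 10#${a[i]} > 10#${b[i]} )) && return 1
        done
        (( ${#a[@]} < ${#b[@]} ))
    }
    ver_lt 1.15 2 && echo "1.15 is older than 2"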
01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.841 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:39.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.842 01:51:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:41.747 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:41.748 01:51:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:41.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:41.748 01:51:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:41.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:41.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:41.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.748 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.749 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:33:41.749 00:33:41.749 --- 10.0.0.2 ping statistics --- 00:33:41.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.749 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:33:41.749 00:33:41.749 --- 10.0.0.1 ping statistics --- 00:33:41.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.749 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=1037056 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 1037056 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1037056 ']' 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
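Before the target application finishes coming up (the waitforlisten loop that follows), the nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables commands: the first detected port (cvl_0_0 in this run) becomes the target and is moved into its own network namespace, the second port (cvl_0_1) stays in the root namespace as the initiator, and a tagged firewall rule plus two pings verify the 10.0.0.0/24 path. A minimal sketch, assuming the same interface names and addresses this run detected (run as root):

    #!/usr/bin/env bash
    # Minimal sketch of nvmf_tcp_init for a two-port NIC; interface names and
    # addresses are the ones this particular run detected.
    set -e

    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    TARGET_NS=cvl_0_0_ns_spdk
    TARGET_IP=10.0.0.2
    INITIATOR_IP=10.0.0.1

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Target port lives in its own namespace; the initiator port stays in the
    # root namespace.
    ip netns add "$TARGET_NS"
    ip link set "$TARGET_IF" netns "$TARGET_NS"

    ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
    ip netns exec "$TARGET_NS" ip link set lo up

    # Let NVMe/TCP traffic (port 4420) in on the initiator side; the comment
    # tag lets teardown remove exactly this rule later.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    # Sanity-check both directions, as the trace does.
    ping -c 1 "$TARGET_IP"
    ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"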
00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.749 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.749 [2024-10-01 01:51:21.563773] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:33:41.749 [2024-10-01 01:51:21.563867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.008 [2024-10-01 01:51:21.634088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.008 [2024-10-01 01:51:21.725389] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.008 [2024-10-01 01:51:21.725449] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.008 [2024-10-01 01:51:21.725477] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.008 [2024-10-01 01:51:21.725491] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.008 [2024-10-01 01:51:21.725503] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:42.008 [2024-10-01 01:51:21.725535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.008 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.008 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:42.008 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:42.008 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.008 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.267 [2024-10-01 01:51:21.882318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.267 [2024-10-01 01:51:21.890566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:42.267 null0 00:33:42.267 [2024-10-01 01:51:21.922475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1037146 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1037146 /tmp/host.sock 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1037146 ']' 00:33:42.267 01:51:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:42.267 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.267 01:51:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.267 [2024-10-01 01:51:21.992463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:33:42.267 [2024-10-01 01:51:21.992555] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037146 ] 00:33:42.267 [2024-10-01 01:51:22.057743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.526 [2024-10-01 01:51:22.151175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:42.526 01:51:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.526 01:51:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.900 [2024-10-01 01:51:23.391823] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:43.900 [2024-10-01 01:51:23.391866] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:43.900 [2024-10-01 01:51:23.391894] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.900 [2024-10-01 01:51:23.519339] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:43.900 [2024-10-01 01:51:23.704425] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:43.900 [2024-10-01 01:51:23.704503] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:43.900 [2024-10-01 01:51:23.704547] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:43.900 [2024-10-01 01:51:23.704575] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:43.900 [2024-10-01 01:51:23.704611] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.900 [2024-10-01 01:51:23.709224] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f55250 was disconnected and freed. delete nvme_qpair. 
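On the host side, the trace above boils down to: a second nvmf_tgt is started with --wait-for-rpc on /tmp/host.sock, bdev_nvme options are set, initialization is finished, the discovery service on 10.0.0.2:8009 is attached, and bdev_get_bdevs is polled until the namespace bdev (nvme0n1) appears. rpc_cmd in the harness forwards to scripts/rpc.py (or an equivalent JSON-RPC client); the sketch below calls rpc.py directly, with the SPDK path as a placeholder and all method names and flags copied from the trace.

    #!/usr/bin/env bash
    # Host-side sketch; SPDK_ROOT is a placeholder for the checked-out tree.
    set -e

    SPDK_ROOT=/path/to/spdk
    HOST_SOCK=/tmp/host.sock
    rpc() { "$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" "$@"; }

    # The host app was started with --wait-for-rpc, so options can be set
    # before init completes (flags exactly as passed in the trace).
    rpc bdev_nvme_set_options -e 1
    rpc framework_start_init

    # Attach to the discovery service; short loss/reconnect timeouts so the
    # interface-removal case later in the test resolves within seconds.
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    get_bdev_list() { rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }

    # Same polling idea as wait_for_bdev in the trace: loop until the bdev
    # created by the discovery attach is the one listed.
    while [[ "$(get_bdev_list)" != nvme0n1 ]]; do
        sleep 1
    done
    echo "bdev list: $(get_bdev_list)"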
00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:43.900 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.159 01:51:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:45.093 01:51:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:46.027 01:51:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.027 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:46.285 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.285 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:46.285 01:51:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:47.220 01:51:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.154 01:51:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.154 01:51:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:48.155 01:51:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:49.528 01:51:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.528 [2024-10-01 01:51:29.145813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:49.528 [2024-10-01 01:51:29.145891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.528 [2024-10-01 01:51:29.145916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.528 [2024-10-01 01:51:29.145954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.528 [2024-10-01 01:51:29.145971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.528 [2024-10-01 01:51:29.145987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.528 [2024-10-01 01:51:29.146020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.528 [2024-10-01 01:51:29.146051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.528 [2024-10-01 01:51:29.146064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.528 [2024-10-01 01:51:29.146078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.528 [2024-10-01 01:51:29.146091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.528 [2024-10-01 01:51:29.146105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31b00 is same with the state(6) to be set 00:33:49.528 [2024-10-01 01:51:29.155829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f31b00 (9): Bad file descriptor 00:33:49.528 [2024-10-01 01:51:29.165876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.460 [2024-10-01 01:51:30.184079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:50.460 [2024-10-01 01:51:30.184157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f31b00 with addr=10.0.0.2, port=4420 00:33:50.460 [2024-10-01 01:51:30.184187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f31b00 is same with the state(6) to be set 00:33:50.460 [2024-10-01 01:51:30.184245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f31b00 (9): Bad file descriptor 00:33:50.460 [2024-10-01 01:51:30.184748] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:50.460 [2024-10-01 01:51:30.184800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:50.460 [2024-10-01 01:51:30.184821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:50.460 [2024-10-01 01:51:30.184839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:50.460 [2024-10-01 01:51:30.184874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:50.460 [2024-10-01 01:51:30.184895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:50.460 01:51:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:51.393 [2024-10-01 01:51:31.187398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.394 [2024-10-01 01:51:31.187433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.394 [2024-10-01 01:51:31.187450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:51.394 [2024-10-01 01:51:31.187465] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:51.394 [2024-10-01 01:51:31.187489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
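The ERROR burst above (spdk_sock_recv errno 110, "Bad file descriptor", "Resetting controller failed.") is the expected fallout of the step under test: the target-side address was deleted and the link taken down, so the host keeps retrying until the 2-second ctrlr-loss-timeout configured at discovery time expires and the bdev is torn down. Roughly, that step looks like the following sketch (placeholder rpc.py path; namespace and interface names from this run):

    #!/usr/bin/env bash
    # Sketch of the interface-removal step being exercised above.
    TARGET_NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0
    HOST_SOCK=/tmp/host.sock
    rpc() { /path/to/spdk/scripts/rpc.py -s "$HOST_SOCK" "$@"; }

    ip netns exec "$TARGET_NS" ip addr del 10.0.0.2/24 dev "$TARGET_IF"
    ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" down

    # Mirror wait_for_bdev '': poll until no bdevs are reported, i.e. the
    # controller was declared lost and nvme0n1 was deleted.
    while [[ -n "$(rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do
        sleep 1
    done
    echo "nvme0n1 gone after interface removal"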
00:33:51.394 [2024-10-01 01:51:31.187530] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:51.394 [2024-10-01 01:51:31.187570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.394 [2024-10-01 01:51:31.187595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.394 [2024-10-01 01:51:31.187616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.394 [2024-10-01 01:51:31.187632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.394 [2024-10-01 01:51:31.187648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.394 [2024-10-01 01:51:31.187663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.394 [2024-10-01 01:51:31.187680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.394 [2024-10-01 01:51:31.187696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.394 [2024-10-01 01:51:31.187714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:51.394 [2024-10-01 01:51:31.187730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:51.394 [2024-10-01 01:51:31.187746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:51.394 [2024-10-01 01:51:31.187853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21210 (9): Bad file descriptor 00:33:51.394 [2024-10-01 01:51:31.188871] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:51.394 [2024-10-01 01:51:31.188898] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.394 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:51.651 01:51:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.023 01:51:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:53.023 01:51:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.588 [2024-10-01 01:51:33.246748] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:53.588 [2024-10-01 01:51:33.246794] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:53.588 [2024-10-01 01:51:33.246822] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:53.588 [2024-10-01 01:51:33.375229] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:53.846 01:51:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.846 [2024-10-01 01:51:33.559120] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:53.846 [2024-10-01 01:51:33.559185] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:53.846 [2024-10-01 01:51:33.559219] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:53.846 [2024-10-01 01:51:33.559244] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:53.846 [2024-10-01 01:51:33.559262] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:53.846 [2024-10-01 01:51:33.565470] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f2d420 was disconnected and freed. 
delete nvme_qpair. 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1037146 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1037146 ']' 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1037146 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1037146 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1037146' 00:33:54.779 killing process with pid 1037146 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1037146 00:33:54.779 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1037146 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.037 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.037 rmmod nvme_tcp 00:33:55.037 rmmod nvme_fabrics 00:33:55.295 rmmod nvme_keyring 
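For readers following the xtrace above, the wait_for_bdev / get_bdev_list pattern the test keeps re-running reduces to roughly the bash below. The RPC socket /tmp/host.sock, the rpc_cmd helper, and the bdev name nvme1n1 are taken from the trace; the loop itself is an illustrative paraphrase, not the verbatim upstream test script.

  # Sketch of the polling seen above: list the host app's bdevs over the RPC
  # socket and wait until the expected namespace bdev reappears after the
  # interface IP is restored and the discovery service re-attaches nvme1.
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1   # e.g. nvme1n1 in this run
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }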
00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 1037056 ']' 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 1037056 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1037056 ']' 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1037056 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1037056 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1037056' 00:33:55.295 killing process with pid 1037056 00:33:55.295 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1037056 00:33:55.296 01:51:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1037056 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.554 01:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.455 01:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:57.456 00:33:57.456 real 0m18.097s 00:33:57.456 user 0m26.261s 00:33:57.456 sys 0m2.995s 00:33:57.456 01:51:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.456 ************************************ 00:33:57.456 END TEST nvmf_discovery_remove_ifc 00:33:57.456 ************************************ 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.456 ************************************ 00:33:57.456 START TEST nvmf_identify_kernel_target 00:33:57.456 ************************************ 00:33:57.456 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:57.716 * Looking for test storage... 00:33:57.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.716 --rc genhtml_branch_coverage=1 00:33:57.716 --rc genhtml_function_coverage=1 00:33:57.716 --rc genhtml_legend=1 00:33:57.716 --rc geninfo_all_blocks=1 00:33:57.716 --rc geninfo_unexecuted_blocks=1 00:33:57.716 00:33:57.716 ' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.716 --rc genhtml_branch_coverage=1 00:33:57.716 --rc genhtml_function_coverage=1 00:33:57.716 --rc genhtml_legend=1 00:33:57.716 --rc geninfo_all_blocks=1 00:33:57.716 --rc geninfo_unexecuted_blocks=1 00:33:57.716 00:33:57.716 ' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.716 --rc genhtml_branch_coverage=1 00:33:57.716 --rc genhtml_function_coverage=1 00:33:57.716 --rc genhtml_legend=1 00:33:57.716 --rc geninfo_all_blocks=1 00:33:57.716 --rc geninfo_unexecuted_blocks=1 00:33:57.716 00:33:57.716 ' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.716 --rc genhtml_branch_coverage=1 00:33:57.716 --rc genhtml_function_coverage=1 00:33:57.716 --rc genhtml_legend=1 00:33:57.716 --rc geninfo_all_blocks=1 00:33:57.716 --rc geninfo_unexecuted_blocks=1 00:33:57.716 00:33:57.716 ' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.716 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:57.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.717 01:51:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.619 01:51:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:59.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:59.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:59.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:59.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:59.619 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
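The NIC discovery traced just above amounts to a sysfs walk: for each supported PCI function the script globs the net/ directory under /sys/bus/pci/devices and keeps interfaces that are up. A minimal sketch follows, using the two E810 addresses found in this run purely as example input; the "up" check is one plausible reading of the [[ up == up ]] test in the trace, since the source of that state is not visible in the xtrace.

  # Map an NVMf-capable PCI function to its kernel net device, as done above
  # (cvl_0_0 under 0000:0a:00.0, cvl_0_1 under 0000:0a:00.1 in this run).
  # Illustrative paraphrase of nvmf/common.sh, not the exact script.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdev ]] || continue
          name=${netdev##*/}
          # Assumed check: keep only interfaces whose operstate reads "up".
          [[ $(cat "/sys/class/net/$name/operstate" 2>/dev/null) == up ]] || continue
          echo "Found net devices under $pci: $name"
      done
  done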
00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.620 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.878 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.878 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.878 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:59.878 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:33:59.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:33:59.878 00:33:59.878 --- 10.0.0.2 ping statistics --- 00:33:59.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.878 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:33:59.878 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:59.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:33:59.878 00:33:59.879 --- 10.0.0.1 ping statistics --- 00:33:59.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.879 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:59.879 01:51:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:59.879 01:51:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.818 Waiting for block devices as requested 00:34:00.818 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:01.078 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.078 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.338 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:01.338 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:01.338 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:01.338 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.599 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.599 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:01.599 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.599 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.858 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:01.858 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:01.858 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:01.858 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:02.116 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:02.116 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:02.116 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:34:02.116 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:02.116 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:34:02.116 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:02.117 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:02.117 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:02.117 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:34:02.117 
01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:02.117 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:02.376 No valid GPT data, bailing 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:02.376 01:51:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:02.376 00:34:02.376 Discovery Log Number of Records 2, Generation counter 2 00:34:02.376 =====Discovery Log Entry 0====== 00:34:02.376 trtype: tcp 00:34:02.376 adrfam: ipv4 00:34:02.376 subtype: current discovery subsystem 00:34:02.376 treq: not specified, sq flow control disable supported 00:34:02.376 portid: 1 00:34:02.376 trsvcid: 4420 00:34:02.376 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:02.376 traddr: 10.0.0.1 00:34:02.376 eflags: none 00:34:02.376 sectype: none 00:34:02.376 =====Discovery Log Entry 1====== 00:34:02.376 trtype: tcp 00:34:02.376 adrfam: ipv4 00:34:02.376 subtype: nvme subsystem 00:34:02.376 treq: not specified, sq flow control disable supported 00:34:02.376 portid: 1 00:34:02.376 trsvcid: 4420 00:34:02.376 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:02.376 traddr: 
10.0.0.1 00:34:02.376 eflags: none 00:34:02.376 sectype: none 00:34:02.376 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:02.376 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:02.376 ===================================================== 00:34:02.376 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:02.376 ===================================================== 00:34:02.376 Controller Capabilities/Features 00:34:02.376 ================================ 00:34:02.376 Vendor ID: 0000 00:34:02.376 Subsystem Vendor ID: 0000 00:34:02.376 Serial Number: 4a969bcc7770f7b104c2 00:34:02.376 Model Number: Linux 00:34:02.376 Firmware Version: 6.8.9-20 00:34:02.376 Recommended Arb Burst: 0 00:34:02.376 IEEE OUI Identifier: 00 00 00 00:34:02.376 Multi-path I/O 00:34:02.376 May have multiple subsystem ports: No 00:34:02.376 May have multiple controllers: No 00:34:02.376 Associated with SR-IOV VF: No 00:34:02.376 Max Data Transfer Size: Unlimited 00:34:02.376 Max Number of Namespaces: 0 00:34:02.376 Max Number of I/O Queues: 1024 00:34:02.376 NVMe Specification Version (VS): 1.3 00:34:02.376 NVMe Specification Version (Identify): 1.3 00:34:02.376 Maximum Queue Entries: 1024 00:34:02.376 Contiguous Queues Required: No 00:34:02.376 Arbitration Mechanisms Supported 00:34:02.376 Weighted Round Robin: Not Supported 00:34:02.376 Vendor Specific: Not Supported 00:34:02.376 Reset Timeout: 7500 ms 00:34:02.376 Doorbell Stride: 4 bytes 00:34:02.376 NVM Subsystem Reset: Not Supported 00:34:02.376 Command Sets Supported 00:34:02.376 NVM Command Set: Supported 00:34:02.376 Boot Partition: Not Supported 00:34:02.376 Memory Page Size Minimum: 4096 bytes 00:34:02.376 Memory Page Size Maximum: 4096 bytes 00:34:02.376 Persistent Memory Region: Not Supported 00:34:02.376 Optional Asynchronous Events Supported 00:34:02.376 Namespace Attribute Notices: Not Supported 00:34:02.376 Firmware Activation Notices: Not Supported 00:34:02.376 ANA Change Notices: Not Supported 00:34:02.376 PLE Aggregate Log Change Notices: Not Supported 00:34:02.376 LBA Status Info Alert Notices: Not Supported 00:34:02.376 EGE Aggregate Log Change Notices: Not Supported 00:34:02.376 Normal NVM Subsystem Shutdown event: Not Supported 00:34:02.376 Zone Descriptor Change Notices: Not Supported 00:34:02.376 Discovery Log Change Notices: Supported 00:34:02.376 Controller Attributes 00:34:02.376 128-bit Host Identifier: Not Supported 00:34:02.376 Non-Operational Permissive Mode: Not Supported 00:34:02.376 NVM Sets: Not Supported 00:34:02.376 Read Recovery Levels: Not Supported 00:34:02.376 Endurance Groups: Not Supported 00:34:02.376 Predictable Latency Mode: Not Supported 00:34:02.376 Traffic Based Keep ALive: Not Supported 00:34:02.376 Namespace Granularity: Not Supported 00:34:02.376 SQ Associations: Not Supported 00:34:02.376 UUID List: Not Supported 00:34:02.376 Multi-Domain Subsystem: Not Supported 00:34:02.376 Fixed Capacity Management: Not Supported 00:34:02.376 Variable Capacity Management: Not Supported 00:34:02.376 Delete Endurance Group: Not Supported 00:34:02.376 Delete NVM Set: Not Supported 00:34:02.376 Extended LBA Formats Supported: Not Supported 00:34:02.376 Flexible Data Placement Supported: Not Supported 00:34:02.376 00:34:02.376 Controller Memory Buffer Support 00:34:02.376 ================================ 
00:34:02.376 Supported: No 00:34:02.376 00:34:02.376 Persistent Memory Region Support 00:34:02.376 ================================ 00:34:02.376 Supported: No 00:34:02.376 00:34:02.376 Admin Command Set Attributes 00:34:02.376 ============================ 00:34:02.376 Security Send/Receive: Not Supported 00:34:02.376 Format NVM: Not Supported 00:34:02.376 Firmware Activate/Download: Not Supported 00:34:02.376 Namespace Management: Not Supported 00:34:02.376 Device Self-Test: Not Supported 00:34:02.376 Directives: Not Supported 00:34:02.376 NVMe-MI: Not Supported 00:34:02.376 Virtualization Management: Not Supported 00:34:02.376 Doorbell Buffer Config: Not Supported 00:34:02.376 Get LBA Status Capability: Not Supported 00:34:02.376 Command & Feature Lockdown Capability: Not Supported 00:34:02.376 Abort Command Limit: 1 00:34:02.376 Async Event Request Limit: 1 00:34:02.376 Number of Firmware Slots: N/A 00:34:02.377 Firmware Slot 1 Read-Only: N/A 00:34:02.377 Firmware Activation Without Reset: N/A 00:34:02.377 Multiple Update Detection Support: N/A 00:34:02.377 Firmware Update Granularity: No Information Provided 00:34:02.377 Per-Namespace SMART Log: No 00:34:02.377 Asymmetric Namespace Access Log Page: Not Supported 00:34:02.377 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:02.377 Command Effects Log Page: Not Supported 00:34:02.377 Get Log Page Extended Data: Supported 00:34:02.377 Telemetry Log Pages: Not Supported 00:34:02.377 Persistent Event Log Pages: Not Supported 00:34:02.377 Supported Log Pages Log Page: May Support 00:34:02.377 Commands Supported & Effects Log Page: Not Supported 00:34:02.377 Feature Identifiers & Effects Log Page:May Support 00:34:02.377 NVMe-MI Commands & Effects Log Page: May Support 00:34:02.377 Data Area 4 for Telemetry Log: Not Supported 00:34:02.377 Error Log Page Entries Supported: 1 00:34:02.377 Keep Alive: Not Supported 00:34:02.377 00:34:02.377 NVM Command Set Attributes 00:34:02.377 ========================== 00:34:02.377 Submission Queue Entry Size 00:34:02.377 Max: 1 00:34:02.377 Min: 1 00:34:02.377 Completion Queue Entry Size 00:34:02.377 Max: 1 00:34:02.377 Min: 1 00:34:02.377 Number of Namespaces: 0 00:34:02.377 Compare Command: Not Supported 00:34:02.377 Write Uncorrectable Command: Not Supported 00:34:02.377 Dataset Management Command: Not Supported 00:34:02.377 Write Zeroes Command: Not Supported 00:34:02.377 Set Features Save Field: Not Supported 00:34:02.377 Reservations: Not Supported 00:34:02.377 Timestamp: Not Supported 00:34:02.377 Copy: Not Supported 00:34:02.377 Volatile Write Cache: Not Present 00:34:02.377 Atomic Write Unit (Normal): 1 00:34:02.377 Atomic Write Unit (PFail): 1 00:34:02.377 Atomic Compare & Write Unit: 1 00:34:02.377 Fused Compare & Write: Not Supported 00:34:02.377 Scatter-Gather List 00:34:02.377 SGL Command Set: Supported 00:34:02.377 SGL Keyed: Not Supported 00:34:02.377 SGL Bit Bucket Descriptor: Not Supported 00:34:02.377 SGL Metadata Pointer: Not Supported 00:34:02.377 Oversized SGL: Not Supported 00:34:02.377 SGL Metadata Address: Not Supported 00:34:02.377 SGL Offset: Supported 00:34:02.377 Transport SGL Data Block: Not Supported 00:34:02.377 Replay Protected Memory Block: Not Supported 00:34:02.377 00:34:02.377 Firmware Slot Information 00:34:02.377 ========================= 00:34:02.377 Active slot: 0 00:34:02.377 00:34:02.377 00:34:02.377 Error Log 00:34:02.377 ========= 00:34:02.377 00:34:02.377 Active Namespaces 00:34:02.377 ================= 00:34:02.377 Discovery Log Page 00:34:02.377 
================== 00:34:02.377 Generation Counter: 2 00:34:02.377 Number of Records: 2 00:34:02.377 Record Format: 0 00:34:02.377 00:34:02.377 Discovery Log Entry 0 00:34:02.377 ---------------------- 00:34:02.377 Transport Type: 3 (TCP) 00:34:02.377 Address Family: 1 (IPv4) 00:34:02.377 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:02.377 Entry Flags: 00:34:02.377 Duplicate Returned Information: 0 00:34:02.377 Explicit Persistent Connection Support for Discovery: 0 00:34:02.377 Transport Requirements: 00:34:02.377 Secure Channel: Not Specified 00:34:02.377 Port ID: 1 (0x0001) 00:34:02.377 Controller ID: 65535 (0xffff) 00:34:02.377 Admin Max SQ Size: 32 00:34:02.377 Transport Service Identifier: 4420 00:34:02.377 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:02.377 Transport Address: 10.0.0.1 00:34:02.377 Discovery Log Entry 1 00:34:02.377 ---------------------- 00:34:02.377 Transport Type: 3 (TCP) 00:34:02.377 Address Family: 1 (IPv4) 00:34:02.377 Subsystem Type: 2 (NVM Subsystem) 00:34:02.377 Entry Flags: 00:34:02.377 Duplicate Returned Information: 0 00:34:02.377 Explicit Persistent Connection Support for Discovery: 0 00:34:02.377 Transport Requirements: 00:34:02.377 Secure Channel: Not Specified 00:34:02.377 Port ID: 1 (0x0001) 00:34:02.377 Controller ID: 65535 (0xffff) 00:34:02.377 Admin Max SQ Size: 32 00:34:02.377 Transport Service Identifier: 4420 00:34:02.377 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:02.377 Transport Address: 10.0.0.1 00:34:02.377 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:02.636 get_feature(0x01) failed 00:34:02.636 get_feature(0x02) failed 00:34:02.636 get_feature(0x04) failed 00:34:02.636 ===================================================== 00:34:02.636 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:02.636 ===================================================== 00:34:02.636 Controller Capabilities/Features 00:34:02.636 ================================ 00:34:02.636 Vendor ID: 0000 00:34:02.636 Subsystem Vendor ID: 0000 00:34:02.636 Serial Number: 9d064d1d2909d268d081 00:34:02.636 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:02.636 Firmware Version: 6.8.9-20 00:34:02.636 Recommended Arb Burst: 6 00:34:02.636 IEEE OUI Identifier: 00 00 00 00:34:02.636 Multi-path I/O 00:34:02.636 May have multiple subsystem ports: Yes 00:34:02.636 May have multiple controllers: Yes 00:34:02.636 Associated with SR-IOV VF: No 00:34:02.636 Max Data Transfer Size: Unlimited 00:34:02.636 Max Number of Namespaces: 1024 00:34:02.636 Max Number of I/O Queues: 128 00:34:02.636 NVMe Specification Version (VS): 1.3 00:34:02.636 NVMe Specification Version (Identify): 1.3 00:34:02.636 Maximum Queue Entries: 1024 00:34:02.636 Contiguous Queues Required: No 00:34:02.636 Arbitration Mechanisms Supported 00:34:02.636 Weighted Round Robin: Not Supported 00:34:02.636 Vendor Specific: Not Supported 00:34:02.636 Reset Timeout: 7500 ms 00:34:02.636 Doorbell Stride: 4 bytes 00:34:02.636 NVM Subsystem Reset: Not Supported 00:34:02.636 Command Sets Supported 00:34:02.636 NVM Command Set: Supported 00:34:02.636 Boot Partition: Not Supported 00:34:02.636 Memory Page Size Minimum: 4096 bytes 00:34:02.636 Memory Page Size Maximum: 4096 bytes 00:34:02.636 Persistent Memory Region: Not 
Supported 00:34:02.636 Optional Asynchronous Events Supported 00:34:02.636 Namespace Attribute Notices: Supported 00:34:02.636 Firmware Activation Notices: Not Supported 00:34:02.636 ANA Change Notices: Supported 00:34:02.636 PLE Aggregate Log Change Notices: Not Supported 00:34:02.636 LBA Status Info Alert Notices: Not Supported 00:34:02.636 EGE Aggregate Log Change Notices: Not Supported 00:34:02.636 Normal NVM Subsystem Shutdown event: Not Supported 00:34:02.636 Zone Descriptor Change Notices: Not Supported 00:34:02.636 Discovery Log Change Notices: Not Supported 00:34:02.636 Controller Attributes 00:34:02.636 128-bit Host Identifier: Supported 00:34:02.636 Non-Operational Permissive Mode: Not Supported 00:34:02.636 NVM Sets: Not Supported 00:34:02.636 Read Recovery Levels: Not Supported 00:34:02.636 Endurance Groups: Not Supported 00:34:02.636 Predictable Latency Mode: Not Supported 00:34:02.636 Traffic Based Keep ALive: Supported 00:34:02.637 Namespace Granularity: Not Supported 00:34:02.637 SQ Associations: Not Supported 00:34:02.637 UUID List: Not Supported 00:34:02.637 Multi-Domain Subsystem: Not Supported 00:34:02.637 Fixed Capacity Management: Not Supported 00:34:02.637 Variable Capacity Management: Not Supported 00:34:02.637 Delete Endurance Group: Not Supported 00:34:02.637 Delete NVM Set: Not Supported 00:34:02.637 Extended LBA Formats Supported: Not Supported 00:34:02.637 Flexible Data Placement Supported: Not Supported 00:34:02.637 00:34:02.637 Controller Memory Buffer Support 00:34:02.637 ================================ 00:34:02.637 Supported: No 00:34:02.637 00:34:02.637 Persistent Memory Region Support 00:34:02.637 ================================ 00:34:02.637 Supported: No 00:34:02.637 00:34:02.637 Admin Command Set Attributes 00:34:02.637 ============================ 00:34:02.637 Security Send/Receive: Not Supported 00:34:02.637 Format NVM: Not Supported 00:34:02.637 Firmware Activate/Download: Not Supported 00:34:02.637 Namespace Management: Not Supported 00:34:02.637 Device Self-Test: Not Supported 00:34:02.637 Directives: Not Supported 00:34:02.637 NVMe-MI: Not Supported 00:34:02.637 Virtualization Management: Not Supported 00:34:02.637 Doorbell Buffer Config: Not Supported 00:34:02.637 Get LBA Status Capability: Not Supported 00:34:02.637 Command & Feature Lockdown Capability: Not Supported 00:34:02.637 Abort Command Limit: 4 00:34:02.637 Async Event Request Limit: 4 00:34:02.637 Number of Firmware Slots: N/A 00:34:02.637 Firmware Slot 1 Read-Only: N/A 00:34:02.637 Firmware Activation Without Reset: N/A 00:34:02.637 Multiple Update Detection Support: N/A 00:34:02.637 Firmware Update Granularity: No Information Provided 00:34:02.637 Per-Namespace SMART Log: Yes 00:34:02.637 Asymmetric Namespace Access Log Page: Supported 00:34:02.637 ANA Transition Time : 10 sec 00:34:02.637 00:34:02.637 Asymmetric Namespace Access Capabilities 00:34:02.637 ANA Optimized State : Supported 00:34:02.637 ANA Non-Optimized State : Supported 00:34:02.637 ANA Inaccessible State : Supported 00:34:02.637 ANA Persistent Loss State : Supported 00:34:02.637 ANA Change State : Supported 00:34:02.637 ANAGRPID is not changed : No 00:34:02.637 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:02.637 00:34:02.637 ANA Group Identifier Maximum : 128 00:34:02.637 Number of ANA Group Identifiers : 128 00:34:02.637 Max Number of Allowed Namespaces : 1024 00:34:02.637 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:02.637 Command Effects Log Page: Supported 00:34:02.637 Get Log Page Extended Data: 
Supported 00:34:02.637 Telemetry Log Pages: Not Supported 00:34:02.637 Persistent Event Log Pages: Not Supported 00:34:02.637 Supported Log Pages Log Page: May Support 00:34:02.637 Commands Supported & Effects Log Page: Not Supported 00:34:02.637 Feature Identifiers & Effects Log Page:May Support 00:34:02.637 NVMe-MI Commands & Effects Log Page: May Support 00:34:02.637 Data Area 4 for Telemetry Log: Not Supported 00:34:02.637 Error Log Page Entries Supported: 128 00:34:02.637 Keep Alive: Supported 00:34:02.637 Keep Alive Granularity: 1000 ms 00:34:02.637 00:34:02.637 NVM Command Set Attributes 00:34:02.637 ========================== 00:34:02.637 Submission Queue Entry Size 00:34:02.637 Max: 64 00:34:02.637 Min: 64 00:34:02.637 Completion Queue Entry Size 00:34:02.637 Max: 16 00:34:02.637 Min: 16 00:34:02.637 Number of Namespaces: 1024 00:34:02.637 Compare Command: Not Supported 00:34:02.637 Write Uncorrectable Command: Not Supported 00:34:02.637 Dataset Management Command: Supported 00:34:02.637 Write Zeroes Command: Supported 00:34:02.637 Set Features Save Field: Not Supported 00:34:02.637 Reservations: Not Supported 00:34:02.637 Timestamp: Not Supported 00:34:02.637 Copy: Not Supported 00:34:02.637 Volatile Write Cache: Present 00:34:02.637 Atomic Write Unit (Normal): 1 00:34:02.637 Atomic Write Unit (PFail): 1 00:34:02.637 Atomic Compare & Write Unit: 1 00:34:02.637 Fused Compare & Write: Not Supported 00:34:02.637 Scatter-Gather List 00:34:02.637 SGL Command Set: Supported 00:34:02.637 SGL Keyed: Not Supported 00:34:02.637 SGL Bit Bucket Descriptor: Not Supported 00:34:02.637 SGL Metadata Pointer: Not Supported 00:34:02.637 Oversized SGL: Not Supported 00:34:02.637 SGL Metadata Address: Not Supported 00:34:02.637 SGL Offset: Supported 00:34:02.637 Transport SGL Data Block: Not Supported 00:34:02.637 Replay Protected Memory Block: Not Supported 00:34:02.637 00:34:02.637 Firmware Slot Information 00:34:02.637 ========================= 00:34:02.637 Active slot: 0 00:34:02.637 00:34:02.637 Asymmetric Namespace Access 00:34:02.637 =========================== 00:34:02.637 Change Count : 0 00:34:02.637 Number of ANA Group Descriptors : 1 00:34:02.637 ANA Group Descriptor : 0 00:34:02.637 ANA Group ID : 1 00:34:02.637 Number of NSID Values : 1 00:34:02.637 Change Count : 0 00:34:02.637 ANA State : 1 00:34:02.637 Namespace Identifier : 1 00:34:02.637 00:34:02.637 Commands Supported and Effects 00:34:02.637 ============================== 00:34:02.637 Admin Commands 00:34:02.637 -------------- 00:34:02.637 Get Log Page (02h): Supported 00:34:02.637 Identify (06h): Supported 00:34:02.637 Abort (08h): Supported 00:34:02.637 Set Features (09h): Supported 00:34:02.637 Get Features (0Ah): Supported 00:34:02.637 Asynchronous Event Request (0Ch): Supported 00:34:02.637 Keep Alive (18h): Supported 00:34:02.637 I/O Commands 00:34:02.637 ------------ 00:34:02.637 Flush (00h): Supported 00:34:02.637 Write (01h): Supported LBA-Change 00:34:02.637 Read (02h): Supported 00:34:02.637 Write Zeroes (08h): Supported LBA-Change 00:34:02.637 Dataset Management (09h): Supported 00:34:02.637 00:34:02.637 Error Log 00:34:02.637 ========= 00:34:02.637 Entry: 0 00:34:02.637 Error Count: 0x3 00:34:02.637 Submission Queue Id: 0x0 00:34:02.637 Command Id: 0x5 00:34:02.637 Phase Bit: 0 00:34:02.637 Status Code: 0x2 00:34:02.637 Status Code Type: 0x0 00:34:02.637 Do Not Retry: 1 00:34:02.637 Error Location: 0x28 00:34:02.637 LBA: 0x0 00:34:02.637 Namespace: 0x0 00:34:02.637 Vendor Log Page: 0x0 00:34:02.637 ----------- 
00:34:02.637 Entry: 1 00:34:02.637 Error Count: 0x2 00:34:02.637 Submission Queue Id: 0x0 00:34:02.637 Command Id: 0x5 00:34:02.637 Phase Bit: 0 00:34:02.637 Status Code: 0x2 00:34:02.637 Status Code Type: 0x0 00:34:02.637 Do Not Retry: 1 00:34:02.637 Error Location: 0x28 00:34:02.637 LBA: 0x0 00:34:02.637 Namespace: 0x0 00:34:02.637 Vendor Log Page: 0x0 00:34:02.637 ----------- 00:34:02.637 Entry: 2 00:34:02.637 Error Count: 0x1 00:34:02.637 Submission Queue Id: 0x0 00:34:02.637 Command Id: 0x4 00:34:02.637 Phase Bit: 0 00:34:02.637 Status Code: 0x2 00:34:02.637 Status Code Type: 0x0 00:34:02.637 Do Not Retry: 1 00:34:02.637 Error Location: 0x28 00:34:02.637 LBA: 0x0 00:34:02.637 Namespace: 0x0 00:34:02.637 Vendor Log Page: 0x0 00:34:02.637 00:34:02.637 Number of Queues 00:34:02.637 ================ 00:34:02.637 Number of I/O Submission Queues: 128 00:34:02.637 Number of I/O Completion Queues: 128 00:34:02.637 00:34:02.637 ZNS Specific Controller Data 00:34:02.637 ============================ 00:34:02.637 Zone Append Size Limit: 0 00:34:02.637 00:34:02.637 00:34:02.637 Active Namespaces 00:34:02.637 ================= 00:34:02.637 get_feature(0x05) failed 00:34:02.637 Namespace ID:1 00:34:02.637 Command Set Identifier: NVM (00h) 00:34:02.637 Deallocate: Supported 00:34:02.637 Deallocated/Unwritten Error: Not Supported 00:34:02.637 Deallocated Read Value: Unknown 00:34:02.637 Deallocate in Write Zeroes: Not Supported 00:34:02.637 Deallocated Guard Field: 0xFFFF 00:34:02.637 Flush: Supported 00:34:02.637 Reservation: Not Supported 00:34:02.637 Namespace Sharing Capabilities: Multiple Controllers 00:34:02.637 Size (in LBAs): 1953525168 (931GiB) 00:34:02.637 Capacity (in LBAs): 1953525168 (931GiB) 00:34:02.637 Utilization (in LBAs): 1953525168 (931GiB) 00:34:02.637 UUID: 8f508fee-93cd-4e95-93e0-e4f9d5a5ac18 00:34:02.637 Thin Provisioning: Not Supported 00:34:02.637 Per-NS Atomic Units: Yes 00:34:02.637 Atomic Boundary Size (Normal): 0 00:34:02.637 Atomic Boundary Size (PFail): 0 00:34:02.637 Atomic Boundary Offset: 0 00:34:02.637 NGUID/EUI64 Never Reused: No 00:34:02.637 ANA group ID: 1 00:34:02.638 Namespace Write Protected: No 00:34:02.638 Number of LBA Formats: 1 00:34:02.638 Current LBA Format: LBA Format #00 00:34:02.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:02.638 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.638 rmmod nvme_tcp 00:34:02.638 rmmod nvme_fabrics 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:02.638 01:51:42 
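The identify data, error-log entries and namespace details above are what the nvmf_identify_kernel_target test reports after attaching to the kernel nvmet subsystem nqn.2016-06.io.spdk:testnqn. For reference, roughly the same information can be pulled by hand with nvme-cli once a connection exists; the address and the /dev/nvme1 node below are illustrative placeholders, not values taken from this run.

  # Illustrative only: the 10.0.0.x address and /dev/nvme1 node are assumptions.
  nvme discover -t tcp -a 10.0.0.1 -s 4420
  nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  nvme id-ctrl   /dev/nvme1          # controller attributes, ANA and log-page support
  nvme id-ns     /dev/nvme1 -n 1     # namespace size/capacity/LBA format
  nvme error-log /dev/nvme1          # the error-log entries shown above
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn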
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.638 01:51:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:04.546 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:04.804 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:04.804 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:04.804 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:04.804 01:51:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:06.181 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:06.181 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:06.181 
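The clean_kernel_target trace above tears the kernel nvmet configuration down through configfs in the reverse order of its creation: the port-to-subsystem link goes first, then the namespace, port and subsystem directories, and finally the nvmet_tcp/nvmet modules. A minimal standalone sketch of that ordering follows (paths match the trace; the redirect target of the bare "echo 0" is not shown in the log, so treating it as the namespace enable attribute is an assumption):

  #!/usr/bin/env bash
  # Sketch of the teardown order used by clean_kernel_target above (run as root).
  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  PORT=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$SUBSYS/namespaces/1/enable"                    # assumed target of the trace's 'echo 0'
  rm -f  "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"     # unlink the subsystem from the port
  rmdir  "$SUBSYS/namespaces/1"                             # then remove the namespace ...
  rmdir  "$PORT"                                            # ... the port ...
  rmdir  "$SUBSYS"                                          # ... and the subsystem itself
  modprobe -r nvmet_tcp nvmet                               # unload transports once configfs is empty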
0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:06.181 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:07.115 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:07.115 00:34:07.115 real 0m9.528s 00:34:07.115 user 0m2.090s 00:34:07.115 sys 0m3.465s 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.115 ************************************ 00:34:07.115 END TEST nvmf_identify_kernel_target 00:34:07.115 ************************************ 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.115 ************************************ 00:34:07.115 START TEST nvmf_auth_host 00:34:07.115 ************************************ 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:07.115 * Looking for test storage... 
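The scripts/setup.sh run above rebinds the I/OAT DMA engines (8086:0e2x) and the NVMe drive at 0000:88:00.0 from their kernel drivers to vfio-pci so SPDK can drive them from userspace. setup.sh automates this across all supported devices; a generic, hedged sketch of the same rebind for a single function via driver_override looks like the following (the device address is just an example from this log):

  # Generic rebind of one PCI function to vfio-pci; SPDK's setup.sh does this for you.
  dev=0000:88:00.0
  modprobe vfio-pci
  echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
  [ -e /sys/bus/pci/devices/$dev/driver ] && \
      echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind   # detach the current driver (e.g. nvme)
  echo "$dev" > /sys/bus/pci/drivers_probe                    # re-probe; driver_override selects vfio-pci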
00:34:07.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:34:07.115 01:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.377 --rc genhtml_branch_coverage=1 00:34:07.377 --rc genhtml_function_coverage=1 00:34:07.377 --rc genhtml_legend=1 00:34:07.377 --rc geninfo_all_blocks=1 00:34:07.377 --rc geninfo_unexecuted_blocks=1 00:34:07.377 00:34:07.377 ' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.377 --rc genhtml_branch_coverage=1 00:34:07.377 --rc genhtml_function_coverage=1 00:34:07.377 --rc genhtml_legend=1 00:34:07.377 --rc geninfo_all_blocks=1 00:34:07.377 --rc geninfo_unexecuted_blocks=1 00:34:07.377 00:34:07.377 ' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.377 --rc genhtml_branch_coverage=1 00:34:07.377 --rc genhtml_function_coverage=1 00:34:07.377 --rc genhtml_legend=1 00:34:07.377 --rc geninfo_all_blocks=1 00:34:07.377 --rc geninfo_unexecuted_blocks=1 00:34:07.377 00:34:07.377 ' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.377 --rc genhtml_branch_coverage=1 00:34:07.377 --rc genhtml_function_coverage=1 00:34:07.377 --rc genhtml_legend=1 00:34:07.377 --rc geninfo_all_blocks=1 00:34:07.377 --rc geninfo_unexecuted_blocks=1 00:34:07.377 00:34:07.377 ' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.377 01:51:47 
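The cmp_versions walk above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2") so the right branch/function-coverage flags get exported. It splits each version string on ".", "-" and ":" and compares the fields numerically, left to right. A compact sketch of the same idea (the function name below is a placeholder, not the script's):

  # Field-by-field version comparison, as walked through in the trace above.
  version_lt() {                         # returns 0 (true) if $1 < $2
    local IFS=.-: a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                             # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov older than 2.x"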
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.377 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:07.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
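host/auth.sh begins by declaring digests=("sha256" "sha384" "sha512") and dhgroups=("ffdhe2048" … "ffdhe8192"), which together span the 3 x 5 DH-HMAC-CHAP parameter matrix the test exercises. How the script actually drives those combinations is outside this excerpt; the loop below only illustrates the combination space (the echo is a placeholder, not a real test invocation):

  # The auth test's parameter space: every hash paired with every FFDHE group.
  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      echo "would run one auth case with digest=$digest dhgroup=$dhgroup"   # placeholder
    done
  done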
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.378 01:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.285 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.286 01:51:48 
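auth.sh points nvmet_subsys and nvmet_host at /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 and /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0; the configfs setup that uses them happens later and is not in this excerpt. As a reminder of the generic kernel-nvmet pattern those two paths imply, a hedged sketch (not auth.sh's actual commands) would be:

  # Generic configfs pattern: a subsystem restricted to one explicitly allowed host.
  modprobe nvmet_tcp                     # pulls in nvmet as a dependency
  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  mkdir -p "$SUBSYS" "$HOST"
  echo 0 > "$SUBSYS/attr_allow_any_host"                     # only allowed_hosts may connect
  ln -s "$HOST" "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"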
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:09.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:09.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.286 01:51:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:09.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:09.286 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.286 01:51:48 
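prepare_net_devs above matches the two E810 functions (8086:159b) against the supported-NIC tables, looks under /sys/bus/pci/devices/<bdf>/net/ for a usable interface, and collects cvl_0_0 and cvl_0_1 as the test NICs. The same lookup can be reproduced by hand with the BDFs taken from the trace:

  # Map the two E810 functions found above to their netdev names via sysfs.
  lspci -d 8086:159b                         # list all functions with that vendor:device ID
  for bdf in 0000:0a:00.0 0000:0a:00.1; do
    ls /sys/bus/pci/devices/$bdf/net/        # prints e.g. cvl_0_0 / cvl_0_1
  done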
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.286 01:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:34:09.286 00:34:09.286 --- 10.0.0.2 ping statistics --- 00:34:09.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.286 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:34:09.286 00:34:09.286 --- 10.0.0.1 ping statistics --- 00:34:09.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.286 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:09.286 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=1044354 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 1044354 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1044354 ']' 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
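nvmf_tcp_init above builds the usual SPDK TCP test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator interface, and both directions are verified with a single ping. Condensed from the trace (run as root):

  # Condensed from the nvmf_tcp_init trace above.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator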
00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:09.287 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2c0d158909d024acc8828dca10581136 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.13j 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2c0d158909d024acc8828dca10581136 0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2c0d158909d024acc8828dca10581136 0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2c0d158909d024acc8828dca10581136 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.13j 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.13j 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.13j 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:09.855 01:51:49 
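gen_dhchap_key above (and repeated below for the remaining key/ckey slots) builds each DH-HMAC-CHAP secret by reading the requested amount of random material with xxd, wrapping it into SPDK's "DHHC-1:..." key string via the inline python helper in format_dhchap_key, and writing the result mode 0600 to a /tmp/spdk.key-* file. A simplified sketch of the random-material step only; the exact DHHC-1 encoding is done by the helper in nvmf/common.sh and is not reproduced here:

  # Random key material only; the DHHC-1:... wrapping is left to format_dhchap_key.
  len=32                                            # requested key length in hex characters
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 2c0d158909d024acc8828dca10581136
  file=$(mktemp -t spdk.key-null.XXX)
  # ... format_dhchap_key "$key" <digest-id> would produce the DHHC-1 string written to "$file" ...
  chmod 0600 "$file"                                # secrets must not be world-readable
  echo "$file"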
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9d0fe3d0c0db1fa77a4b6248482d673d3880afa368d00dfb525f85eb7c91426e 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.yzz 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9d0fe3d0c0db1fa77a4b6248482d673d3880afa368d00dfb525f85eb7c91426e 3 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9d0fe3d0c0db1fa77a4b6248482d673d3880afa368d00dfb525f85eb7c91426e 3 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9d0fe3d0c0db1fa77a4b6248482d673d3880afa368d00dfb525f85eb7c91426e 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.yzz 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.yzz 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.yzz 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4dc911c69edbca61c4f350c437429518ca652c503739faa2 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.9AM 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4dc911c69edbca61c4f350c437429518ca652c503739faa2 0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4dc911c69edbca61c4f350c437429518ca652c503739faa2 0 
00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4dc911c69edbca61c4f350c437429518ca652c503739faa2 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.9AM 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.9AM 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9AM 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:09.855 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e61667308a50c1bbbf6e7eb743403934dcfbee778b3dacf2 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.noT 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e61667308a50c1bbbf6e7eb743403934dcfbee778b3dacf2 2 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e61667308a50c1bbbf6e7eb743403934dcfbee778b3dacf2 2 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e61667308a50c1bbbf6e7eb743403934dcfbee778b3dacf2 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:09.856 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.noT 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.noT 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.noT 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.116 01:51:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b9291a86c731e45308df8f4520bce9e6 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.2Ta 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b9291a86c731e45308df8f4520bce9e6 1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b9291a86c731e45308df8f4520bce9e6 1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b9291a86c731e45308df8f4520bce9e6 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.2Ta 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.2Ta 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2Ta 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f68f123b959989e50b7a12ea397e89aa 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.68w 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f68f123b959989e50b7a12ea397e89aa 1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f68f123b959989e50b7a12ea397e89aa 1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=f68f123b959989e50b7a12ea397e89aa 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.68w 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.68w 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.68w 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=88f1dc8aa93e3da966c4660d7ce6b0cfb7ddae9cc8cf844c 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.XnC 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 88f1dc8aa93e3da966c4660d7ce6b0cfb7ddae9cc8cf844c 2 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 88f1dc8aa93e3da966c4660d7ce6b0cfb7ddae9cc8cf844c 2 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=88f1dc8aa93e3da966c4660d7ce6b0cfb7ddae9cc8cf844c 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.XnC 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.XnC 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XnC 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:34:10.116 01:51:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=39312bdfaa5f2af1f4a004d8cd418a10 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Z8F 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 39312bdfaa5f2af1f4a004d8cd418a10 0 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 39312bdfaa5f2af1f4a004d8cd418a10 0 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:10.116 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=39312bdfaa5f2af1f4a004d8cd418a10 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Z8F 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Z8F 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Z8F 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=325abbad214e57c4fa27d0a20d8e355132a136a3558262a6bd52523760916088 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.QA4 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 325abbad214e57c4fa27d0a20d8e355132a136a3558262a6bd52523760916088 3 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 325abbad214e57c4fa27d0a20d8e355132a136a3558262a6bd52523760916088 3 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=325abbad214e57c4fa27d0a20d8e355132a136a3558262a6bd52523760916088 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.QA4 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.QA4 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.QA4 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1044354 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1044354 ']' 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.117 01:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.13j 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.yzz ]] 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yzz 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9AM 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.683 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.noT ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.noT 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2Ta 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.68w ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.68w 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.XnC 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Z8F ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Z8F 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.QA4 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:10.684 01:51:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:10.684 01:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:11.621 Waiting for block devices as requested 00:34:11.621 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:11.903 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:11.903 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:12.194 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:12.194 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:12.194 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:12.452 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:12.452 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:12.452 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:12.452 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:12.452 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:12.712 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:12.712 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:12.712 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:12.712 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:12.970 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:12.970 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:13.228 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:13.488 No valid GPT data, bailing 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:13.488 01:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:13.488 00:34:13.488 Discovery Log Number of Records 2, Generation counter 2 00:34:13.488 =====Discovery Log Entry 0====== 00:34:13.488 trtype: tcp 00:34:13.488 adrfam: ipv4 00:34:13.488 subtype: current discovery subsystem 00:34:13.488 treq: not specified, sq flow control disable supported 00:34:13.488 portid: 1 00:34:13.488 trsvcid: 4420 00:34:13.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:13.488 traddr: 10.0.0.1 00:34:13.488 eflags: none 00:34:13.488 sectype: none 00:34:13.488 =====Discovery Log Entry 1====== 00:34:13.488 trtype: tcp 00:34:13.488 adrfam: ipv4 00:34:13.488 subtype: nvme subsystem 00:34:13.488 treq: not specified, sq flow control disable supported 00:34:13.488 portid: 1 00:34:13.488 trsvcid: 4420 00:34:13.488 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:13.488 traddr: 10.0.0.1 00:34:13.488 eflags: none 00:34:13.488 sectype: none 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.488 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.748 nvme0n1 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
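The connect_authenticate calls traced here follow a fixed host-side pattern: the DHHC-1 secrets generated earlier are registered with keyring_file_add_key, the digest/DH-group pair under test is allowed via bdev_nvme_set_options, and the controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key before being detached again for the next combination. A minimal sketch of that sequence, assuming scripts/rpc.py from the SPDK tree (the log's rpc_cmd wrapper resolves the RPC socket itself) and the key files shown in the trace:

    rpc=scripts/rpc.py

    # Register the generated DHHC-1 secrets with the SPDK keyring.
    $rpc keyring_file_add_key key1  /tmp/spdk.key-null.9AM
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.noT

    # Restrict the initiator to the digest/DH-group pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach to the kernel target, authenticating with key1/ckey1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Tear down before the next digest/dhgroup/keyid combination.
    $rpc bdev_nvme_detach_controller nvme0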
00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.748 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.009 nvme0n1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.009 01:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.009 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.267 nvme0n1 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.267 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.268 01:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.268 nvme0n1 00:34:14.268 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.268 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.268 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.268 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:34:14.268 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.526 nvme0n1 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.526 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 
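On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) pairs one of the generated secrets with a digest and DH group for the allowed host created earlier under /sys/kernel/config/nvmet/hosts. The redirect targets of the echoes are not visible in the xtrace, so the configfs attribute names below are an assumption based on the kernel nvmet authentication interface; a hedged sketch for the keyid=4 case traced here:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Digest and DH group selected for this pass (echoed at auth.sh@48/@49).
    echo 'hmac(sha256)' > "$host/dhchap_hash"       # assumed attribute name
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"    # assumed attribute name

    # Host secret for keyid 4 (keys[4] above); keyid 4 has no controller key
    # (ckeys[4] is empty), so the corresponding dhchap_ctrl_key is left unset.
    echo 'DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=:' \
        > "$host/dhchap_key"                        # assumed attribute name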
00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.784 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.785 nvme0n1 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.785 01:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:14.785 
01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.785 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.043 nvme0n1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.043 01:51:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.043 01:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.300 nvme0n1 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:15.300 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.301 01:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.301 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.559 nvme0n1 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:15.559 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.560 01:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.560 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.819 nvme0n1 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
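Key id 4 above carries no controller (bidirectional) secret: ckey expands to the empty string and the [[ -z '' ]] check skips the echo, so the attach that follows passes only --dhchap-key key4. The host/auth.sh@58 expansion in the trace handles both cases with one line; the generic form implied by the per-keyid commands above is roughly:

  # pass --dhchap-ctrlr-key only when a controller secret exists for this key id
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
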
00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.819 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.820 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.079 nvme0n1 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:34:16.079 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.338 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.338 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.338 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.338 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.338 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.339 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.339 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.339 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.339 01:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.599 nvme0n1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.599 01:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.599 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.859 nvme0n1 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
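The same set-key / set-options / attach / verify / detach cycle repeats for every combination in this part of the run: dhgroups ffdhe3072, ffdhe4096 and ffdhe6144, key ids 0 through 4, all under hmac(sha256). The loop structure is visible in the host/auth.sh@101-@103 markers; a sketch of it, assuming the digest is fixed at sha256 as it is throughout this excerpt (the full script presumably iterates other digests and dhgroups elsewhere):

  for dhgroup in "${dhgroups[@]}"; do              # ffdhe3072, ffdhe4096, ffdhe6144, ... in this excerpt
      for keyid in "${!keys[@]}"; do               # key ids 0..4
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # target side: install key/ckey
          connect_authenticate sha256 "$dhgroup" "$keyid"   # host side: set_options, attach, verify, detach
      done
  done
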
00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.859 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.860 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.118 nvme0n1 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.118 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.377 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.378 01:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.378 01:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.637 nvme0n1 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.637 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.895 nvme0n1 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 
]] 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.895 01:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.464 nvme0n1 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.464 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.465 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.723 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.290 nvme0n1 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.290 01:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.290 01:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.858 nvme0n1 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:19.858 
01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.858 01:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.429 nvme0n1 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.429 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.996 nvme0n1 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:20.996 01:52:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.996 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.997 01:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.938 nvme0n1 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.938 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.939 01:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.878 nvme0n1 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.878 01:52:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.878 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.136 01:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.070 nvme0n1 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.070 01:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.070 01:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.010 nvme0n1 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.010 01:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.010 01:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.949 nvme0n1 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:25.949 
01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.949 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.210 nvme0n1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # local ip 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.210 01:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.470 nvme0n1 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.470 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.471 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.471 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.471 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.731 nvme0n1 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.731 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.732 01:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.732 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.019 nvme0n1 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.019 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.020 nvme0n1 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.020 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.279 01:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.279 01:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.279 nvme0n1 00:34:27.279 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.279 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.279 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.279 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.279 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.538 01:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.538 01:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.538 nvme0n1 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.538 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.797 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.798 nvme0n1 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.798 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.056 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.057 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.316 nvme0n1 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.316 
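[annotation] A note for readers following the trace: bash xtrace does not print output redirections, so the bare echo lines at host/auth.sh@48-51 only show what is written, not where it goes. The sketch below is a plausible reconstruction of the target-side helper, assuming the keys/ckeys arrays visible elsewhere in the trace and the usual kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the configfs path and the use of host NQN nqn.2024-02.io.spdk:host0 there are assumptions for illustration, not something this log confirms.

    # Hypothetical reconstruction of nvmet_auth_set_key; xtrace hides the redirection
    # targets, so every path below is an assumption, not taken from this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"      # host/auth.sh@48
        echo "${dhgroup}"      > "${host_cfg}/dhchap_dhgroup"   # host/auth.sh@49
        echo "${key}"          > "${host_cfg}/dhchap_key"       # host/auth.sh@50
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"  # host/auth.sh@51
    }

[end annotation]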
01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.316 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.317 01:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.577 nvme0n1 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.577 
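[annotation] The host-side half of each iteration follows directly from the RPCs traced at host/auth.sh@55-65: restrict the allowed digest and DH group, attach with the key under test, check that the controller came up, then detach before the next keyid. Condensed into one place, the pattern looks like the sketch below; it summarizes the commands visible in this log, assuming the test's rpc_cmd wrapper and the keyN/ckeyN key names registered earlier, and is not the verbatim connect_authenticate body.

    # Condensed from the traced pattern; assumes rpc_cmd and the keyN/ckeyN names used above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key argument is optional, mirroring the ckey array seen at host/auth.sh@58.
        local ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey_arg[@]}"
        # The attach only succeeds if DH-CHAP authentication passed; verify and clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

[end annotation]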
01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.577 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.838 nvme0n1 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.838 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.839 01:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.839 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 nvme0n1 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.099 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.357 01:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 nvme0n1 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.617 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.618 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.877 nvme0n1 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.877 01:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:29.877 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.878 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.137 nvme0n1 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.138 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.399 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.399 01:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.399 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.970 nvme0n1 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:30.970 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.971 01:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.542 nvme0n1 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.542 01:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.542 01:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.542 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.112 nvme0n1 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:32.112 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.113 01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.113 
01:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 nvme0n1 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.684 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.254 nvme0n1 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.254 01:52:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.254 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.255 01:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.192 nvme0n1 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.192 01:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:34.192 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:34.193 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:34.193 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.193 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.193 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.128 nvme0n1 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:35.128 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.129 
01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.129 01:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.064 nvme0n1 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.064 01:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.996 nvme0n1 00:34:36.996 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.996 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.996 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.996 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.997 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.997 01:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.997 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.997 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.997 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.997 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:37.256 01:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.256 01:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.194 nvme0n1 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.194 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.195 01:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:38.453 nvme0n1 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.453 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.454 nvme0n1 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.454 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:38.712 
01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.712 nvme0n1 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.712 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.971 
01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.971 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.972 nvme0n1 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.972 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.231 nvme0n1 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.231 01:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.231 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.232 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 nvme0n1 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.502 
01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.502 01:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.502 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.503 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.503 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.503 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.763 nvme0n1 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:39.763 01:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.763 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.039 nvme0n1 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.039 01:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.039 01:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.323 nvme0n1 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.323 
01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.323 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.324 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.324 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.324 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
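
The sweep in this trace repeats one pattern per digest/dhgroup/keyid combination: provision the secret on the nvmet target side (nvmet_auth_set_key, host/auth.sh@103), then authenticate from the SPDK host side (connect_authenticate, host/auth.sh@104), verify the controller and detach it. A minimal sketch of the target-side step is shown below; it assumes the bare echo commands visible at host/auth.sh@48-51 are redirected into the host's dhchap_* attributes under nvmet configfs, and the configfs path, variable names and key placeholder are illustrative rather than copied from the harness.

  # Assumed configfs entry for the host NQN used throughout this run
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  digest=sha512 dhgroup=ffdhe3072   # pair currently under test
  key='DHHC-1:...'                  # keys[keyid] value as printed in the trace
  ckey=''                           # ckeys[keyid]; empty for key IDs without a controller key

  echo "hmac($digest)" > "$host_dir/dhchap_hash"      # digest offered for DH-HMAC-CHAP
  echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # FFDHE group offered
  echo "$key"          > "$host_dir/dhchap_key"       # host secret
  [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # bidirectional secret, when present
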
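On the host side, connect_authenticate boils down to the two RPCs that appear verbatim above: bdev_nvme_set_options to pin the digest and DH group (host/auth.sh@60) and bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key (host/auth.sh@61). Reproducing one iteration by hand would look roughly like the sketch below; scripts/rpc.py is the usual SPDK RPC client, and the key names key3/ckey3 assume the DHHC-1 secrets were registered with the SPDK application's keyring earlier in the run, which this excerpt does not show.

  # Restrict the initiator to the digest/dhgroup pair under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach over TCP with the host key (and controller key, when the key ID has one)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # Same verification the trace performs at host/auth.sh@64-65, then tear down
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] comparison in the trace is that same check: each successful authentication must leave a controller named nvme0 behind, and it is detached before the next digest/dhgroup/keyid combination is tried.
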
00:34:40.586 nvme0n1 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.586 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.587 01:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.587 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.846 nvme0n1 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.846 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.106 01:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.106 01:52:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.106 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.365 nvme0n1 00:34:41.365 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.365 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.365 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.365 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.365 01:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.365 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.366 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.626 nvme0n1 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.626 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.884 nvme0n1 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.884 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.143 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.144 01:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.402 nvme0n1 00:34:42.402 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.402 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.403 01:52:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.403 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.969 nvme0n1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:42.969 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.970 01:52:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.970 01:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.538 nvme0n1 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.538 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.106 nvme0n1 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.106 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.365 01:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.365 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.934 nvme0n1 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.934 01:52:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:44.934 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:44.935 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.935 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.935 01:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 nvme0n1 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmMwZDE1ODkwOWQwMjRhY2M4ODI4ZGNhMTA1ODExMzb3/mGp: 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWQwZmUzZDBjMGRiMWZhNzdhNGI2MjQ4NDgyZDY3M2QzODgwYWZhMzY4ZDAwZGZiNTI1Zjg1ZWI3YzkxNDI2ZTERZT4=: 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.505 01:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.442 nvme0n1 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.442 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.443 01:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.418 nvme0n1 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.418 01:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.418 01:52:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.418 01:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.355 nvme0n1 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODhmMWRjOGFhOTNlM2RhOTY2YzQ2NjBkN2NlNmIwY2ZiN2RkYWU5Y2M4Y2Y4NDRjGrGGKw==: 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzkzMTJiZGZhYTVmMmFmMWY0YTAwNGQ4Y2Q0MThhMTC8Ku3J: 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.355 01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.355 
01:52:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.294 nvme0n1 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI1YWJiYWQyMTRlNTdjNGZhMjdkMGEyMGQ4ZTM1NTEzMmExMzZhMzU1ODI2MmE2YmQ1MjUyMzc2MDkxNjA4OBN2Ac8=: 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.294 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.552 01:52:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.489 nvme0n1 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.489 request: 00:34:50.489 { 00:34:50.489 "name": "nvme0", 00:34:50.489 "trtype": "tcp", 00:34:50.489 "traddr": "10.0.0.1", 00:34:50.489 "adrfam": "ipv4", 00:34:50.489 "trsvcid": "4420", 00:34:50.489 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:50.489 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:50.489 "prchk_reftag": false, 00:34:50.489 "prchk_guard": false, 00:34:50.489 "hdgst": false, 00:34:50.489 "ddgst": false, 00:34:50.489 "allow_unrecognized_csi": false, 00:34:50.489 "method": "bdev_nvme_attach_controller", 00:34:50.489 "req_id": 1 00:34:50.489 } 00:34:50.489 Got JSON-RPC error response 00:34:50.489 response: 00:34:50.489 { 00:34:50.489 "code": -5, 00:34:50.489 "message": "Input/output error" 00:34:50.489 } 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.489 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.490 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 request: 00:34:50.750 { 00:34:50.750 "name": "nvme0", 00:34:50.750 "trtype": "tcp", 00:34:50.750 "traddr": "10.0.0.1", 00:34:50.750 "adrfam": "ipv4", 00:34:50.750 "trsvcid": "4420", 00:34:50.750 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:50.750 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:50.750 "prchk_reftag": false, 00:34:50.750 "prchk_guard": false, 00:34:50.750 "hdgst": false, 00:34:50.750 "ddgst": false, 00:34:50.750 "dhchap_key": "key2", 00:34:50.750 "allow_unrecognized_csi": false, 00:34:50.750 "method": "bdev_nvme_attach_controller", 00:34:50.750 "req_id": 1 00:34:50.750 } 00:34:50.750 Got JSON-RPC error response 00:34:50.750 response: 00:34:50.750 { 00:34:50.750 "code": -5, 00:34:50.750 "message": "Input/output error" 00:34:50.750 } 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
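The wrong-key attempt traced above (host/auth.sh@117) is the negative path being exercised: the host offers hmac(sha256)/ffdhe2048, presents DH-HMAC-CHAP key2 while the kernel nvmet target was configured with key 1, the attach fails with JSON-RPC error -5 ("Input/output error"), and auth.sh@120 confirms no controller was left behind. A minimal standalone sketch of the same check is below; it assumes the target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420 and that the DHHC-1 keys (key0..key4, ckey0..ckey3) were registered on the host earlier in auth.sh (not shown in this excerpt). rpc_cmd in the harness forwards to SPDK's scripts/rpc.py; the path used here is an assumption relative to the SPDK repo root.

# Hypothetical re-run of the wrong-key check shown in the trace above.
rpc=scripts/rpc.py

# Restrict the host to the same digest/dhgroup the target side was given.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attaching with key2 while the target expects key 1 must fail; the RPC is
# expected to return JSON-RPC error -5 (Input/output error) as in the
# request/response captured above.
if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected: attach succeeded with the wrong DH-HMAC-CHAP key" >&2
    exit 1
fi

# The failed handshake must not leave a controller behind.
[[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]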
00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 request: 00:34:50.750 { 00:34:50.750 "name": "nvme0", 00:34:50.750 "trtype": "tcp", 00:34:50.750 "traddr": "10.0.0.1", 00:34:50.750 "adrfam": "ipv4", 00:34:50.750 "trsvcid": "4420", 00:34:50.750 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:50.750 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:50.750 "prchk_reftag": false, 00:34:50.750 "prchk_guard": false, 00:34:50.750 "hdgst": false, 00:34:50.750 "ddgst": false, 00:34:50.750 "dhchap_key": "key1", 00:34:50.750 "dhchap_ctrlr_key": "ckey2", 00:34:50.750 "allow_unrecognized_csi": false, 00:34:50.750 "method": "bdev_nvme_attach_controller", 00:34:50.750 "req_id": 1 00:34:50.750 } 00:34:50.750 Got JSON-RPC error response 00:34:50.750 response: 00:34:50.750 { 00:34:50.750 "code": -5, 00:34:50.750 "message": "Input/output 
error" 00:34:50.750 } 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.750 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.009 nvme0n1 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:51.009 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:51.010 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.010 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.268 request: 00:34:51.268 { 00:34:51.268 "name": "nvme0", 00:34:51.268 "dhchap_key": "key1", 00:34:51.268 "dhchap_ctrlr_key": "ckey2", 00:34:51.268 "method": "bdev_nvme_set_keys", 00:34:51.268 "req_id": 1 00:34:51.268 } 00:34:51.268 Got JSON-RPC error response 00:34:51.268 response: 00:34:51.268 { 00:34:51.268 "code": -13, 00:34:51.268 "message": "Permission denied" 00:34:51.268 } 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:51.268 01:52:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:52.204 01:52:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:53.143 01:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.143 01:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:53.143 01:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.143 01:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.143 01:52:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRjOTExYzY5ZWRiY2E2MWM0ZjM1MGM0Mzc0Mjk1MThjYTY1MmM1MDM3MzlmYWEyh/BNhA==: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTYxNjY3MzA4YTUwYzFiYmJmNmU3ZWI3NDM0MDM5MzRkY2ZiZWU3NzhiM2RhY2YylmGFRA==: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.402 nvme0n1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjkyOTFhODZjNzMxZTQ1MzA4ZGY4ZjQ1MjBiY2U5ZTac2JzM: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY4ZjEyM2I5NTk5ODllNTBiN2ExMmVhMzk3ZTg5YWENA+RV: 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.402 request: 00:34:53.402 { 00:34:53.402 "name": "nvme0", 00:34:53.402 "dhchap_key": "key2", 00:34:53.402 "dhchap_ctrlr_key": "ckey1", 00:34:53.402 "method": "bdev_nvme_set_keys", 00:34:53.402 "req_id": 1 00:34:53.402 } 00:34:53.402 Got JSON-RPC error response 00:34:53.402 response: 00:34:53.402 { 00:34:53.402 "code": -13, 00:34:53.402 "message": "Permission denied" 00:34:53.402 } 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.402 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.661 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:53.661 01:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:54.601 01:52:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:54.601 rmmod nvme_tcp 00:34:54.601 rmmod nvme_fabrics 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 1044354 ']' 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 1044354 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1044354 ']' 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1044354 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044354 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044354' 00:34:54.601 killing process with pid 1044354 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1044354 00:34:54.601 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1044354 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:54.860 01:52:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:34:57.395 01:52:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.333 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:58.333 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:58.333 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:59.274 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:59.274 01:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.13j /tmp/spdk.key-null.9AM /tmp/spdk.key-sha256.2Ta /tmp/spdk.key-sha384.XnC /tmp/spdk.key-sha512.QA4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:59.274 01:52:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:00.653 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:00.653 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:00.653 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:35:00.653 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:00.653 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:00.653 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:00.653 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:00.653 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:00.653 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:00.653 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:00.653 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:00.653 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:00.653 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:00.653 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:00.653 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:00.653 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:00.653 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:00.653 00:35:00.653 real 0m53.462s 00:35:00.653 user 0m51.394s 00:35:00.653 sys 0m6.156s 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.653 ************************************ 00:35:00.653 END TEST nvmf_auth_host 00:35:00.653 ************************************ 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.653 ************************************ 00:35:00.653 START TEST nvmf_digest 00:35:00.653 ************************************ 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:00.653 * Looking for test storage... 
00:35:00.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:35:00.653 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.914 --rc genhtml_branch_coverage=1 00:35:00.914 --rc genhtml_function_coverage=1 00:35:00.914 --rc genhtml_legend=1 00:35:00.914 --rc geninfo_all_blocks=1 00:35:00.914 --rc geninfo_unexecuted_blocks=1 00:35:00.914 00:35:00.914 ' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.914 --rc genhtml_branch_coverage=1 00:35:00.914 --rc genhtml_function_coverage=1 00:35:00.914 --rc genhtml_legend=1 00:35:00.914 --rc geninfo_all_blocks=1 00:35:00.914 --rc geninfo_unexecuted_blocks=1 00:35:00.914 00:35:00.914 ' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.914 --rc genhtml_branch_coverage=1 00:35:00.914 --rc genhtml_function_coverage=1 00:35:00.914 --rc genhtml_legend=1 00:35:00.914 --rc geninfo_all_blocks=1 00:35:00.914 --rc geninfo_unexecuted_blocks=1 00:35:00.914 00:35:00.914 ' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:00.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.914 --rc genhtml_branch_coverage=1 00:35:00.914 --rc genhtml_function_coverage=1 00:35:00.914 --rc genhtml_legend=1 00:35:00.914 --rc geninfo_all_blocks=1 00:35:00.914 --rc geninfo_unexecuted_blocks=1 00:35:00.914 00:35:00.914 ' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.914 
01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.914 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:00.915 01:52:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.915 01:52:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.819 
01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:02.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:02.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:02.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.819 
01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:02.819 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:02.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.820 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:03.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:35:03.078 00:35:03.078 --- 10.0.0.2 ping statistics --- 00:35:03.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.078 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:03.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:35:03.078 00:35:03.078 --- 10.0.0.1 ping statistics --- 00:35:03.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.078 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.078 ************************************ 00:35:03.078 START TEST nvmf_digest_clean 00:35:03.078 ************************************ 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=1054232 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 1054232 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1054232 ']' 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.078 01:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.078 [2024-10-01 01:52:42.780421] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:03.078 [2024-10-01 01:52:42.780512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.078 [2024-10-01 01:52:42.846544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.078 [2024-10-01 01:52:42.929643] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.078 [2024-10-01 01:52:42.929697] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.078 [2024-10-01 01:52:42.929720] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.078 [2024-10-01 01:52:42.929732] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.078 [2024-10-01 01:52:42.929741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:03.079 [2024-10-01 01:52:42.929774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.337 null0 00:35:03.337 [2024-10-01 01:52:43.156626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.337 [2024-10-01 01:52:43.180890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1054260 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1054260 /var/tmp/bperf.sock 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1054260 ']' 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.337 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.594 [2024-10-01 01:52:43.235062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:03.594 [2024-10-01 01:52:43.235139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054260 ] 00:35:03.594 [2024-10-01 01:52:43.304705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.594 [2024-10-01 01:52:43.398678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.851 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.851 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:03.852 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:03.852 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:03.852 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:04.109 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.109 01:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.675 nvme0n1 00:35:04.675 01:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:04.675 01:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.675 Running I/O for 2 seconds... 
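For readers following along, the randread run announced above is driven entirely over the bperf RPC socket; the workload shape (-w randread -o 4096 -q 128) was fixed when bdevperf was launched earlier in this log. A minimal manual reproduction, assuming the same bdevperf instance is still listening on /var/tmp/bperf.sock and the target from this log is up on 10.0.0.2:4420 (a sketch run from the spdk checkout, not part of the harness itself), would be:

  # start the accel/bdev framework in the waiting bdevperf instance, as host/digest.sh does
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach the NVMe/TCP controller with data digest enabled (--ddgst), same arguments as above
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed workload that produces the IOPS/latency table below
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests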
00:35:06.553 16935.00 IOPS, 66.15 MiB/s 17378.00 IOPS, 67.88 MiB/s 00:35:06.553 Latency(us) 00:35:06.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:06.553 nvme0n1 : 2.01 17388.54 67.92 0.00 0.00 7351.12 3762.25 16602.45 00:35:06.553 =================================================================================================================== 00:35:06.553 Total : 17388.54 67.92 0.00 0.00 7351.12 3762.25 16602.45 00:35:06.553 { 00:35:06.553 "results": [ 00:35:06.553 { 00:35:06.553 "job": "nvme0n1", 00:35:06.553 "core_mask": "0x2", 00:35:06.553 "workload": "randread", 00:35:06.553 "status": "finished", 00:35:06.553 "queue_depth": 128, 00:35:06.553 "io_size": 4096, 00:35:06.553 "runtime": 2.006149, 00:35:06.553 "iops": 17388.538937038076, 00:35:06.553 "mibps": 67.92398022280499, 00:35:06.553 "io_failed": 0, 00:35:06.553 "io_timeout": 0, 00:35:06.553 "avg_latency_us": 7351.1185992092305, 00:35:06.553 "min_latency_us": 3762.251851851852, 00:35:06.553 "max_latency_us": 16602.453333333335 00:35:06.553 } 00:35:06.553 ], 00:35:06.553 "core_count": 1 00:35:06.553 } 00:35:06.815 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:06.815 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:06.815 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:06.815 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:06.815 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:06.815 | select(.opcode=="crc32c") 00:35:06.815 | "\(.module_name) \(.executed)"' 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1054260 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1054260 ']' 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1054260 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1054260 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1054260' 00:35:07.073 killing process with pid 1054260 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1054260 00:35:07.073 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.073 00:35:07.073 Latency(us) 00:35:07.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.073 =================================================================================================================== 00:35:07.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.073 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1054260 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:07.331 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1054703 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1054703 /var/tmp/bperf.sock 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1054703 ']' 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.332 01:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:07.332 [2024-10-01 01:52:46.989686] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:07.332 [2024-10-01 01:52:46.989778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054703 ] 00:35:07.332 I/O size of 131072 is greater than zero copy threshold (65536). 
00:35:07.332 Zero copy mechanism will not be used. 00:35:07.332 [2024-10-01 01:52:47.055911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.332 [2024-10-01 01:52:47.150698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.589 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:07.589 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:07.590 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:07.590 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:07.590 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.847 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.847 01:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.415 nvme0n1 00:35:08.416 01:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:08.416 01:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.416 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:08.416 Zero copy mechanism will not be used. 00:35:08.416 Running I/O for 2 seconds... 
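The zero-copy notices above simply mean the 131072-byte I/O size of this run exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are copied. Once the run completes, the harness queries the crc32c accel statistics from the same bdevperf instance to confirm which engine computed the digests; a sketch of that query, reusing the RPC call and jq filter visible in this log (socket path assumed unchanged):

  # dump accel framework stats and keep only the crc32c operations; the test then
  # asserts the reporting module matches the expected engine ("software" in this run)
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'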
00:35:10.728 4018.00 IOPS, 502.25 MiB/s 3995.00 IOPS, 499.38 MiB/s 00:35:10.728 Latency(us) 00:35:10.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.728 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:10.728 nvme0n1 : 2.00 3994.51 499.31 0.00 0.00 4000.97 1401.74 7378.87 00:35:10.728 =================================================================================================================== 00:35:10.728 Total : 3994.51 499.31 0.00 0.00 4000.97 1401.74 7378.87 00:35:10.728 { 00:35:10.728 "results": [ 00:35:10.728 { 00:35:10.728 "job": "nvme0n1", 00:35:10.728 "core_mask": "0x2", 00:35:10.728 "workload": "randread", 00:35:10.728 "status": "finished", 00:35:10.728 "queue_depth": 16, 00:35:10.728 "io_size": 131072, 00:35:10.728 "runtime": 2.004249, 00:35:10.728 "iops": 3994.513655738384, 00:35:10.728 "mibps": 499.314206967298, 00:35:10.728 "io_failed": 0, 00:35:10.728 "io_timeout": 0, 00:35:10.728 "avg_latency_us": 4000.9674640316057, 00:35:10.728 "min_latency_us": 1401.7422222222222, 00:35:10.728 "max_latency_us": 7378.868148148148 00:35:10.728 } 00:35:10.728 ], 00:35:10.728 "core_count": 1 00:35:10.728 } 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:10.728 | select(.opcode=="crc32c") 00:35:10.728 | "\(.module_name) \(.executed)"' 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1054703 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1054703 ']' 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1054703 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1054703 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1054703' 00:35:10.728 killing process with pid 1054703 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1054703 00:35:10.728 Received shutdown signal, test time was about 2.000000 seconds 00:35:10.728 00:35:10.728 Latency(us) 00:35:10.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.728 =================================================================================================================== 00:35:10.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.728 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1054703 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1055193 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1055193 /var/tmp/bperf.sock 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1055193 ']' 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.993 01:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.993 [2024-10-01 01:52:50.795596] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:10.993 [2024-10-01 01:52:50.795685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055193 ] 00:35:11.251 [2024-10-01 01:52:50.855418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.251 [2024-10-01 01:52:50.939712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.251 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:11.251 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:11.251 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:11.251 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:11.251 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:11.822 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.822 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.083 nvme0n1 00:35:12.083 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:12.083 01:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.083 Running I/O for 2 seconds... 
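The MiB/s column in each result table is just IOPS scaled by the I/O size. As a quick sanity check against the 4096-byte randwrite result that follows (values taken from this log, using plain bc, which truncates rather than rounds):

  # MiB/s = IOPS * io_size_bytes / 2^20; for the randwrite run reported below:
  echo 'scale=2; 19949.49 * 4096 / 1048576' | bc    # prints 77.92; the table rounds to 77.93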
00:35:14.015 19444.00 IOPS, 75.95 MiB/s 19950.50 IOPS, 77.93 MiB/s 00:35:14.015 Latency(us) 00:35:14.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.015 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.015 nvme0n1 : 2.01 19949.49 77.93 0.00 0.00 6405.59 3325.35 13592.65 00:35:14.015 =================================================================================================================== 00:35:14.015 Total : 19949.49 77.93 0.00 0.00 6405.59 3325.35 13592.65 00:35:14.015 { 00:35:14.015 "results": [ 00:35:14.015 { 00:35:14.015 "job": "nvme0n1", 00:35:14.015 "core_mask": "0x2", 00:35:14.015 "workload": "randwrite", 00:35:14.015 "status": "finished", 00:35:14.015 "queue_depth": 128, 00:35:14.015 "io_size": 4096, 00:35:14.015 "runtime": 2.009726, 00:35:14.015 "iops": 19949.485651277835, 00:35:14.015 "mibps": 77.92767832530404, 00:35:14.015 "io_failed": 0, 00:35:14.015 "io_timeout": 0, 00:35:14.015 "avg_latency_us": 6405.587918312147, 00:35:14.015 "min_latency_us": 3325.345185185185, 00:35:14.015 "max_latency_us": 13592.651851851851 00:35:14.015 } 00:35:14.015 ], 00:35:14.015 "core_count": 1 00:35:14.015 } 00:35:14.273 01:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:14.273 01:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:14.273 01:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:14.273 01:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:14.273 | select(.opcode=="crc32c") 00:35:14.273 | "\(.module_name) \(.executed)"' 00:35:14.273 01:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1055193 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1055193 ']' 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1055193 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1055193 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1055193' 00:35:14.534 killing process with pid 1055193 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1055193 00:35:14.534 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.534 00:35:14.534 Latency(us) 00:35:14.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.534 =================================================================================================================== 00:35:14.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.534 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1055193 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1055603 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1055603 /var/tmp/bperf.sock 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1055603 ']' 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:14.793 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:14.793 [2024-10-01 01:52:54.471280] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:14.793 [2024-10-01 01:52:54.471370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055603 ] 00:35:14.793 I/O size of 131072 is greater than zero copy threshold (65536). 
00:35:14.793 Zero copy mechanism will not be used. 00:35:14.793 [2024-10-01 01:52:54.534297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.793 [2024-10-01 01:52:54.621824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.051 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:15.051 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:15.051 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:15.051 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:15.051 01:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.310 01:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.310 01:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.568 nvme0n1 00:35:15.568 01:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:15.568 01:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.828 Zero copy mechanism will not be used. 00:35:15.828 Running I/O for 2 seconds... 
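This second digest_clean pass switches to 128 KiB random writes at queue depth 16; the "greater than zero copy threshold (65536)" notices above simply record that, with an I/O size of 131072 bytes, the TCP zero-copy path is not used for these buffers. The bdevperf command line is the one traced at host/digest.sh@82, shown here with the flag meanings spelled out as comments (a sketch; flag interpretations follow standard bdevperf usage):

  # -m 2: core mask (core 1);  -r: RPC socket;  -w/-o/-q/-t: workload, I/O size,
  # queue depth, run time;  -z plus --wait-for-rpc: stay idle until driven by RPC.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc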
00:35:17.707 3997.00 IOPS, 499.62 MiB/s 4059.50 IOPS, 507.44 MiB/s 00:35:17.707 Latency(us) 00:35:17.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.707 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:17.707 nvme0n1 : 2.01 4057.26 507.16 0.00 0.00 3933.60 2803.48 9854.67 00:35:17.707 =================================================================================================================== 00:35:17.707 Total : 4057.26 507.16 0.00 0.00 3933.60 2803.48 9854.67 00:35:17.707 { 00:35:17.707 "results": [ 00:35:17.707 { 00:35:17.707 "job": "nvme0n1", 00:35:17.707 "core_mask": "0x2", 00:35:17.707 "workload": "randwrite", 00:35:17.707 "status": "finished", 00:35:17.707 "queue_depth": 16, 00:35:17.707 "io_size": 131072, 00:35:17.707 "runtime": 2.005049, 00:35:17.707 "iops": 4057.2574535584918, 00:35:17.707 "mibps": 507.15718169481147, 00:35:17.707 "io_failed": 0, 00:35:17.707 "io_timeout": 0, 00:35:17.707 "avg_latency_us": 3933.60263042637, 00:35:17.707 "min_latency_us": 2803.4844444444443, 00:35:17.707 "max_latency_us": 9854.672592592593 00:35:17.707 } 00:35:17.707 ], 00:35:17.707 "core_count": 1 00:35:17.707 } 00:35:17.966 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:17.966 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:17.966 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:17.966 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:17.966 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:17.966 | select(.opcode=="crc32c") 00:35:17.966 | "\(.module_name) \(.executed)"' 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1055603 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1055603 ']' 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1055603 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1055603 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1055603' 00:35:18.224 killing process with pid 1055603 00:35:18.224 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1055603 00:35:18.224 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.224 00:35:18.224 Latency(us) 00:35:18.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.225 =================================================================================================================== 00:35:18.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.225 01:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1055603 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1054232 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1054232 ']' 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1054232 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1054232 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1054232' 00:35:18.484 killing process with pid 1054232 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1054232 00:35:18.484 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1054232 00:35:18.744 00:35:18.744 real 0m15.622s 00:35:18.744 user 0m31.501s 00:35:18.744 sys 0m4.100s 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.744 ************************************ 00:35:18.744 END TEST nvmf_digest_clean 00:35:18.744 ************************************ 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:18.744 ************************************ 00:35:18.744 START TEST nvmf_digest_error 00:35:18.744 ************************************ 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:18.744 01:52:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=1056135 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 1056135 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1056135 ']' 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:18.744 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.744 [2024-10-01 01:52:58.456834] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:18.744 [2024-10-01 01:52:58.456920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.744 [2024-10-01 01:52:58.522274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.005 [2024-10-01 01:52:58.610945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.005 [2024-10-01 01:52:58.611007] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.005 [2024-10-01 01:52:58.611038] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.005 [2024-10-01 01:52:58.611050] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.005 [2024-10-01 01:52:58.611059] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
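For the digest_error test a dedicated nvmf target is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, as the nvmfappstart trace above shows, and the harness then waits for its RPC socket. A rough, hedged equivalent (waitforlisten is a harness helper; here it is approximated by polling rpc_get_methods on the default /var/tmp/spdk.sock socket):

  # Sketch: launch the target as traced, then poll until its RPC server answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done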
00:35:19.005 [2024-10-01 01:52:58.611086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.005 [2024-10-01 01:52:58.711733] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.005 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.005 null0 00:35:19.005 [2024-10-01 01:52:58.832947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.005 [2024-10-01 01:52:58.857226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1056180 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1056180 /var/tmp/bperf.sock 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1056180 ']' 
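The target-side notices above summarize what common_target_config did: crc32c is re-routed to the accel "error" module (presumably the reason the target runs with --wait-for-rpc, so the assignment can land before the accel framework initializes), a null bdev named null0 is created, the TCP transport is brought up, and a listener is opened on 10.0.0.2:4420. The exact RPC sequence is collapsed in the trace, so the following is an assumed reconstruction using standard SPDK RPCs that would produce those notices (bdev size, block size, and subsystem options are guesses):

  # Assumed reconstruction of common_target_config; only accel_assign_opc and the
  # null0 / TCP-transport / listener notices are actually visible in this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py                      # default socket /var/tmp/spdk.sock
  $RPC accel_assign_opc -o crc32c -m error      # traced at host/digest.sh@104
  $RPC framework_start_init                     # runtime RPCs below need this
  $RPC bdev_null_create null0 100 4096          # sizes are assumptions
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420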
00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:19.264 01:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.264 [2024-10-01 01:52:58.907574] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:19.264 [2024-10-01 01:52:58.907637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056180 ] 00:35:19.264 [2024-10-01 01:52:58.970500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.264 [2024-10-01 01:52:59.061805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.522 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:19.522 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:19.522 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.522 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.779 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.038 nvme0n1 00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
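Putting the error-path setup traced above together: the bperf side sets --nvme-error-stat and --bdev-retry-count -1 before attaching the --ddgst controller, the target side first disables crc32c error injection, and once the controller is attached it is re-armed with -t corrupt -i 256 so that subsequent digest calculations are corrupted (the -i 256 value is copied verbatim from the trace; its precise semantics are not spelled out here). A condensed sketch of that sequence, with the bperf_rpc calls going to /var/tmp/bperf.sock and the rpc_cmd calls going to the target's default socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable        # target: no injection during attach
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256  # target: start corrupting crc32c
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests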
00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:20.038 01:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.296 Running I/O for 2 seconds... 00:35:20.296 [2024-10-01 01:52:59.988778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.296 [2024-10-01 01:52:59.988829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.296 [2024-10-01 01:52:59.988858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.296 [2024-10-01 01:53:00.009070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.009110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.009129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.029098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.029145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.029163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.049840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.063571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.063618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.063645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.082726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.082765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.082785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.102289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.102336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.102355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.116736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.116773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.116800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.297 [2024-10-01 01:53:00.136797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.297 [2024-10-01 01:53:00.136834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.297 [2024-10-01 01:53:00.136854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.158473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.158512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.158535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.178260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.178291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.178313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.192611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.192649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.192671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.210524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.210561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.210587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.229914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.229952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.229971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.249834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.249871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.249891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.269378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.269415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.269435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.289050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.289080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.289101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.307329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.307367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.307387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.322196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.322226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.322243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.341349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.341387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.341408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.360969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.361014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.361050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.375211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.375242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.375271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.557 [2024-10-01 01:53:00.393358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.557 [2024-10-01 01:53:00.393404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.557 [2024-10-01 01:53:00.393425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.414773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.414812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.414832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.434173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.434205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.434221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.453329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.453367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.453387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.473098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.473129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:20.816 [2024-10-01 01:53:00.473145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.493645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.493684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.493704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.512705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.512763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.526709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.526756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.526777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.548011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.548064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.548082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.567691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.567728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.567748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.586836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.586874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.586895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.607461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.607501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:24617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.607521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.626656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.627163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.627187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.641122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.641167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.641185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.816 [2024-10-01 01:53:00.660665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:20.816 [2024-10-01 01:53:00.660703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.816 [2024-10-01 01:53:00.660723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.681780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.681819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.681840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.700856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.700894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.700915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.715255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.715287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.715318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.734182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.734223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.734241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.752880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.752917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.752937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.767600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.767637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.767657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.786571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.786609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.786629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.807483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.807521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.821267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.821315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.821335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.842303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.076 [2024-10-01 01:53:00.842334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.076 [2024-10-01 01:53:00.842350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.076 [2024-10-01 01:53:00.862081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 
00:35:21.077 [2024-10-01 01:53:00.862112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.077 [2024-10-01 01:53:00.862135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.077 [2024-10-01 01:53:00.879323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.077 [2024-10-01 01:53:00.879374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.077 [2024-10-01 01:53:00.879394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.077 [2024-10-01 01:53:00.898777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.077 [2024-10-01 01:53:00.898815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.077 [2024-10-01 01:53:00.898835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.077 [2024-10-01 01:53:00.918112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.077 [2024-10-01 01:53:00.918143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.077 [2024-10-01 01:53:00.918160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:00.932884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:00.932922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:00.932943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:00.951906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:00.951943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:00.951964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:00.967579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:00.967617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:00.967637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 13816.00 IOPS, 53.97 MiB/s [2024-10-01 01:53:00.984194] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:00.984226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:00.984243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.002259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.002301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.002318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.016895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.016933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.016953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.035732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.035770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.035790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.054805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.054841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.054861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.074785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.075261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.075290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.093867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.093903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.093923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
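As a quick sanity check on the interim counter a little above (13816.00 IOPS, 53.97 MiB/s): this pass issues 4096-byte random reads, so 13816 x 4096 = 56,590,336 bytes/s, and 56,590,336 / 1024^2 is roughly 53.97 MiB/s, so the two columns agree. The reads keep completing with COMMAND TRANSIENT TRANSPORT ERROR after each injected data digest error; with --bdev-retry-count -1 set earlier, the bdev layer presumably just keeps retrying them, which is why throughput is still being reported.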
00:35:21.338 [2024-10-01 01:53:01.113027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.113086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.113103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.338 [2024-10-01 01:53:01.127205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.338 [2024-10-01 01:53:01.127238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.338 [2024-10-01 01:53:01.127270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.339 [2024-10-01 01:53:01.144880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.339 [2024-10-01 01:53:01.144917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.339 [2024-10-01 01:53:01.144936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.339 [2024-10-01 01:53:01.161813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.339 [2024-10-01 01:53:01.161852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.339 [2024-10-01 01:53:01.161882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.339 [2024-10-01 01:53:01.181015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.339 [2024-10-01 01:53:01.181065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.339 [2024-10-01 01:53:01.181082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.199010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.199048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.199081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.213847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.213884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.213904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.234216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.234247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.247915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.247952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.247972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.268135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.268644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.286157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.286205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.305909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.305946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.305966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.323813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.323861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.323882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.343806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.343843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.343863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.361808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.361845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.361866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.376060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.376094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.394201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.394248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.394266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.411414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.411549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.411567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.430318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.430350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.430367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.600 [2024-10-01 01:53:01.442632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.600 [2024-10-01 01:53:01.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.600 [2024-10-01 01:53:01.442680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.461502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.461535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.860 [2024-10-01 01:53:01.461553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.479648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.479679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.479696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.493222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.493254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.493270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.512223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.512256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.512273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.530407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.530438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.530455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.548821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.548869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.564508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.564555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.577843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.860 [2024-10-01 01:53:01.577874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8726 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.860 [2024-10-01 01:53:01.577890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.860 [2024-10-01 01:53:01.596274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.596322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.596340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.613301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.613332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.613355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.630882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.630914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.630930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.648378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.648408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.648425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.661285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.661316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.661332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.680571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.680602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.680618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.861 [2024-10-01 01:53:01.698478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:21.861 [2024-10-01 01:53:01.698509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.861 [2024-10-01 01:53:01.698525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.716209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.716244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.716262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.734673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.734704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.734721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.752620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.752652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.752669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.772457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.772489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.772506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.785062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.785095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.785113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.802842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.802873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.802890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.822699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 
00:35:22.122 [2024-10-01 01:53:01.822730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.822747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.841775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.841807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.841824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.854101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.854135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.854153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.872963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.873020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.873039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.889900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.889934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.906632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.906663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.906687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.919974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10) 00:35:22.122 [2024-10-01 01:53:01.920029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.122 [2024-10-01 01:53:01.920047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.122 [2024-10-01 01:53:01.939552] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10)
00:35:22.122 [2024-10-01 01:53:01.939583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.122 [2024-10-01 01:53:01.939600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.122 [2024-10-01 01:53:01.956263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10)
00:35:22.122 [2024-10-01 01:53:01.956296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.122 [2024-10-01 01:53:01.956328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.122 14264.50 IOPS, 55.72 MiB/s [2024-10-01 01:53:01.975194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c69b10)
00:35:22.122 [2024-10-01 01:53:01.975225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.122 [2024-10-01 01:53:01.975241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.381
00:35:22.381 Latency(us)
00:35:22.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:22.381 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:22.381 nvme0n1 : 2.01 14276.54 55.77 0.00 0.00 8957.24 4077.80 28738.75
00:35:22.381 ===================================================================================================================
00:35:22.381 Total : 14276.54 55.77 0.00 0.00 8957.24 4077.80 28738.75
00:35:22.381 {
00:35:22.381 "results": [
00:35:22.381 {
00:35:22.381 "job": "nvme0n1",
00:35:22.381 "core_mask": "0x2",
00:35:22.381 "workload": "randread",
00:35:22.381 "status": "finished",
00:35:22.381 "queue_depth": 128,
00:35:22.381 "io_size": 4096,
00:35:22.381 "runtime": 2.007279,
00:35:22.381 "iops": 14276.540530738377,
00:35:22.381 "mibps": 55.767736448196786,
00:35:22.381 "io_failed": 0,
00:35:22.381 "io_timeout": 0,
00:35:22.381 "avg_latency_us": 8957.237165710918,
00:35:22.381 "min_latency_us": 4077.7955555555554,
00:35:22.381 "max_latency_us": 28738.74962962963
00:35:22.381 }
00:35:22.381 ],
00:35:22.381 "core_count": 1
00:35:22.381 }
00:35:22.381 01:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:22.381 01:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:22.381 01:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:22.381 01:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:22.381 | .driver_specific
00:35:22.381 | .nvme_error
00:35:22.381 | .status_code
00:35:22.381 | .command_transient_transport_error'
00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 ))
00:35:22.641 01:53:02
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1056180 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1056180 ']' 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1056180 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056180 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056180' 00:35:22.641 killing process with pid 1056180 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1056180 00:35:22.641 Received shutdown signal, test time was about 2.000000 seconds 00:35:22.641 00:35:22.641 Latency(us) 00:35:22.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.641 =================================================================================================================== 00:35:22.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:22.641 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1056180 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1056589 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1056589 /var/tmp/bperf.sock 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1056589 ']' 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
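The (( 112 > 0 )) check traced above is the actual pass/fail point of this sub-test: get_transient_errcount fetches bdev I/O statistics over the bdevperf RPC socket and extracts the command_transient_transport_error counter that bdev_nvme keeps when NVMe error statistics are enabled, and digest.sh only requires it to be non-zero. A minimal Python sketch of the same lookup, assuming nothing beyond the rpc.py invocation and the JSON layout implied by the jq filter in the trace:

# get_transient_errcount.py -- hypothetical stand-in for the digest.sh helper of the
# same name; the iostat JSON layout is inferred from the jq filter shown above.
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def get_transient_errcount(bdev):
    out = subprocess.run([RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev],
                         check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    # Same path as: .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #               | .command_transient_transport_error
    return stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

if __name__ == "__main__":
    count = get_transient_errcount("nvme0n1")
    print(count)
    assert count > 0, "expected the injected digest errors to show up as transient errors"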
00:35:22.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:22.900 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.900 [2024-10-01 01:53:02.594420] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:22.900 [2024-10-01 01:53:02.594513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056589 ] 00:35:22.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.900 Zero copy mechanism will not be used. 00:35:22.900 [2024-10-01 01:53:02.657820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.900 [2024-10-01 01:53:02.747618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.157 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:23.157 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:23.157 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:23.157 01:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.415 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.983 nvme0n1 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:23.983 01:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.983 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:23.983 Zero copy mechanism will not be used. 00:35:23.983 Running I/O for 2 seconds... 00:35:23.983 [2024-10-01 01:53:03.771149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.771210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.771230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.779729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.779767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.779797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.788370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.788408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.788435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.796964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.797018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.797057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.805759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.805796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.814466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.814503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.814524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.823165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 
[2024-10-01 01:53:03.823197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.823215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.983 [2024-10-01 01:53:03.831845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:23.983 [2024-10-01 01:53:03.831881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.983 [2024-10-01 01:53:03.831904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.840724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.840761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.840785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.849756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.849792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.849816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.858396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.858432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.858454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.866326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.866362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.866388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.874829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.874865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.874885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.883429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.883468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.883498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.891481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.891517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.891536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.899415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.899451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.899472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.908622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.908658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.908678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.917073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.917106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.917139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.925562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.925597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.925617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.933749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.933784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.933805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.941901] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.242 [2024-10-01 01:53:03.941947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.242 [2024-10-01 01:53:03.941967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.242 [2024-10-01 01:53:03.950846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.950882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.950911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:03.958836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.958872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.958892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:03.963929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.963965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.963985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:03.972425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.972462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.972483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:03.980928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.980964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.980987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:03.989221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.989255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.989272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
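For reference, the first pass summarized earlier finished at 14276.54 IOPS with a 4096-byte I/O size, and its mibps field is simply that rate multiplied by the I/O size and scaled to MiB; the pass now in progress was launched with -o 131072 -q 16, and its READ prints show len:32, consistent with 32 blocks of 4 KiB per I/O. A quick arithmetic check of the reported throughput, using values copied from the JSON block above:

# Sanity-check of the "mibps" value in the bdevperf results above:
#   mibps = iops * io_size / 2**20
iops = 14276.540530738377   # "iops" from the results JSON
io_size = 4096              # "io_size" from the results JSON (first pass)
mibps = iops * io_size / 2**20
print(f"{mibps:.6f} MiB/s")  # ~55.767736, matching the reported "mibps"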
00:35:24.243 [2024-10-01 01:53:03.998041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:03.998073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:03.998092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.007105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.007137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.007155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.016974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.017026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.017061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.026677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.026714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.026746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.036200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.036233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.036252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.045486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.045523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.045543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.055499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.055536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.055564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.065528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.065565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.065585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.074546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.074595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.074619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.082610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.082647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.082667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.243 [2024-10-01 01:53:04.090830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.243 [2024-10-01 01:53:04.090867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.243 [2024-10-01 01:53:04.090895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.100260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.100294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.100333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.109245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.109293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.109311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.118689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.118726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.118745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.128260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.128305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.128322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.137843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.137876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.137899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.147086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.147118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.147136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.156545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.156575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.156599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.166166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.166199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.166217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.175628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.175682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.175700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.501 [2024-10-01 01:53:04.185191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:24.501 [2024-10-01 01:53:04.185225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.501 [2024-10-01 01:53:04.185243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.501 [2024-10-01 01:53:04.195061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390)
00:35:24.501 [2024-10-01 01:53:04.195093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.501 [2024-10-01 01:53:04.195113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence repeats for well over a hundred further READ commands on qid:1 (varying cid, lba and sqhd) through 01:53:05.281: each receive fails the data digest check on tqpair=(0x21e7390) and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); a periodic throughput sample of 3720.00 IOPS, 465.00 MiB/s is interleaved at console time 00:35:25.022 ...]
00:35:25.545 [2024-10-01 01:53:05.289189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390)
00:35:25.545 [2024-10-01 01:53:05.289222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.289241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.296678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.296714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.304335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.304370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.304389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.312258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.312307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.312327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.320262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.320310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.320327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.327901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.327937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.327956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.335491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.335527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.335546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.343082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.343112] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.343134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.350810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.350846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.350865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.358454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.358488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.358507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.365989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.366046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.366063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.373661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.373697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.373716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.381269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.381300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.388857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.388891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.388910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.545 [2024-10-01 01:53:05.396480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21e7390) 00:35:25.545 [2024-10-01 01:53:05.396515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.545 [2024-10-01 01:53:05.396535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.404180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.404227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.404244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.412111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.412141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.412159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.419866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.419901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.419920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.427457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.427493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.427512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.435404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.435439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.435460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.443056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.443087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.443120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.450816] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.450851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.450870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.458365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.458399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.458418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.466073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.466119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.466137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.473654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.473688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.481318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.481365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.481384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.489312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.489357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.489377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.497036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.497085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.497103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.504668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.504702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.504722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.512256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.512301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.512318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.519944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.519978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.520005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.806 [2024-10-01 01:53:05.527694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.806 [2024-10-01 01:53:05.527729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.806 [2024-10-01 01:53:05.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.535270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.535310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.543244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.543297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.543315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.550991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.551048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.551065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.558547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.558582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.558602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.566115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.566145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.566177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.573780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.573815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.573835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.581406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.581440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.581460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.589090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.589135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.589152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.596819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.596853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.596873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.604397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.604432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.604451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.612128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.612159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.612176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.620271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.620304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.620321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.627892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.627926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.627945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.635452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.635487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.635506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.643008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.643057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.643075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.650695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.650729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.807 [2024-10-01 01:53:05.650749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.807 [2024-10-01 01:53:05.658373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:25.807 [2024-10-01 01:53:05.658408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:25.807 [2024-10-01 01:53:05.658428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.665934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.665968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.665987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.673646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.673681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.673707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.681328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.681363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.681382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.689117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.689170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.689188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.696915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.696950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.696970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.704578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.704613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.704631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.712141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.712173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.712190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.719942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.719978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.720007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.727553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.727588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.727608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.735188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.735218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.735235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.742909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.742944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.742962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.750430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.750464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.750483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:26.066 [2024-10-01 01:53:05.758114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.758144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.066 [2024-10-01 01:53:05.758160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:26.066 3837.50 IOPS, 479.69 MiB/s [2024-10-01 01:53:05.766695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21e7390) 00:35:26.066 [2024-10-01 01:53:05.766731] 
00:35:26.066
00:35:26.067 Latency(us)
00:35:26.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:26.067 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:26.067 nvme0n1 : 2.01 3835.53 479.44 0.00 0.00 4165.82 743.35 10194.49
00:35:26.067 ===================================================================================================================
00:35:26.067 Total : 3835.53 479.44 0.00 0.00 4165.82 743.35 10194.49
00:35:26.067 {
00:35:26.067   "results": [
00:35:26.067     {
00:35:26.067       "job": "nvme0n1",
00:35:26.067       "core_mask": "0x2",
00:35:26.067       "workload": "randread",
00:35:26.067       "status": "finished",
00:35:26.067       "queue_depth": 16,
00:35:26.067       "io_size": 131072,
00:35:26.067       "runtime": 2.00546,
00:35:26.067       "iops": 3835.5290058141272,
00:35:26.067       "mibps": 479.4411257267659,
00:35:26.067       "io_failed": 0,
00:35:26.067       "io_timeout": 0,
00:35:26.067       "avg_latency_us": 4165.81585100441,
00:35:26.067       "min_latency_us": 743.3481481481482,
00:35:26.067       "max_latency_us": 10194.488888888889
00:35:26.067     }
00:35:26.067   ],
00:35:26.067   "core_count": 1
00:35:26.067 }
00:35:26.067 01:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:26.067 01:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:26.067 01:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:26.067 01:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:26.067 | .driver_specific
00:35:26.067 | .nvme_error
00:35:26.067 | .status_code
00:35:26.067 | .command_transient_transport_error'
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 248 > 0 ))
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1056589
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1056589 ']'
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1056589
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056589
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056589'
00:35:26.324 killing process with pid 1056589
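The trace above is the check host/digest.sh performs after the randread pass: it reads the per-bdev NVMe error counters over the bdevperf RPC socket and asserts that the transient-transport-error count is non-zero (248 here). A minimal standalone sketch of that same query follows; it assumes an SPDK application is serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1, exactly as in this run, and the variable names are illustrative only.

  #!/usr/bin/env bash
  # Count transient transport errors for one bdev, mirroring get_transient_errcount
  # in host/digest.sh. Paths and names below are the ones used by this job.
  set -euo pipefail

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  bdev=nvme0n1

  # --nvme-error-stat (set earlier via bdev_nvme_set_options) exposes per-status-code
  # counters under .driver_specific.nvme_error in the bdev_get_iostat output.
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  echo "transient transport errors on $bdev: $errcount"
  (( errcount > 0 ))   # the test treats a zero count as a failure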
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1056589
00:35:26.324 Received shutdown signal, test time was about 2.000000 seconds
00:35:26.324
00:35:26.324 Latency(us)
00:35:26.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:26.324 ===================================================================================================================
00:35:26.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:26.324 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1056589
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1057004
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1057004 /var/tmp/bperf.sock
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1057004 ']'
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:26.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:26.582 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:26.582 [2024-10-01 01:53:06.357761] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
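For reference, the launch-and-wait pattern traced above (host/digest.sh@57-60) reduces to the sketch below: start bdevperf in idle mode against a private RPC socket and poll until that socket appears. The binary path, socket path and workload flags are copied from this run; the real waitforlisten helper in autotest_common.sh additionally caps retries at 100 and probes the RPC server, which is only approximated here.

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bperf.sock

  # -m 2: core mask 0x2 (core 1); randwrite, 4096-byte I/O, queue depth 128, 2-second run;
  # -z: stay idle until a perform_tests RPC triggers the workload.
  "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Wait for the UNIX domain RPC socket to show up before issuing any rpc.py calls.
  for _ in $(seq 1 100); do
    [ -S "$sock" ] && break
    kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited during startup" >&2; exit 1; }
    sleep 0.1
  done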
00:35:26.582 [2024-10-01 01:53:06.357842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057004 ]
00:35:26.582 [2024-10-01 01:53:06.424463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:26.840 [2024-10-01 01:53:06.518890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:35:26.840 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:26.840 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:26.840 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:26.840 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:27.098 01:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:27.665 nvme0n1
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:27.665 01:53:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:27.923 Running I/O for 2 seconds...
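Stripped of the xtrace noise, the write-phase setup above comes down to the RPC calls replayed in the sketch below, issued against the new bdevperf instance. The target address, NQN, bdev name and the -i 256 injection argument are copied from this run, and the comments describe the intent only as far as this log shows it.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters (read back later via bdev_get_iostat)
  # and carry over the bdev retry setting used by the trace so the run itself completes.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller with data digest enabled (--ddgst) while CRC32C error
  # injection is disabled, so the connect and namespace discovery succeed.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # prints the new bdev name, nvme0n1

  # Now enable 'corrupt' injection on accel CRC32C operations (-i 256 as in the trace);
  # data digest checks start failing and the affected commands complete with
  # COMMAND TRANSIENT TRANSPORT ERROR, which is what the entries below show.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Trigger the queued randwrite workload in the idle (-z) bdevperf.
  "$bperf_py" -s "$sock" perform_tests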
00:35:27.923 [2024-10-01 01:53:07.542523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ee5c8
00:35:27.923 [2024-10-01 01:53:07.543613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:27.923 [2024-10-01 01:53:07.543666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:27.923 (Data digest error / WRITE print / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries of this same form repeat for the randwrite completions on qid:1, each with its own pdu value, cid and lba, up to the final occurrences below)
00:35:28.185 [2024-10-01 01:53:07.969389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e0ea0
00:35:28.185 [2024-10-01 01:53:07.970852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:28.185 [2024-10-01 01:53:07.970896] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:28.185 [2024-10-01 01:53:07.981660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eff18 00:35:28.185 [2024-10-01 01:53:07.983239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.185 [2024-10-01 01:53:07.983268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.185 [2024-10-01 01:53:07.993893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fe2e8 00:35:28.185 [2024-10-01 01:53:07.995668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.185 [2024-10-01 01:53:07.995697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:28.185 [2024-10-01 01:53:08.002214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ef6a8 00:35:28.185 [2024-10-01 01:53:08.002949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.185 [2024-10-01 01:53:08.002992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:28.185 [2024-10-01 01:53:08.014686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f92c0 00:35:28.185 [2024-10-01 01:53:08.015621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.185 [2024-10-01 01:53:08.015671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:28.185 [2024-10-01 01:53:08.027065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e38d0 00:35:28.185 [2024-10-01 01:53:08.028164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.185 [2024-10-01 01:53:08.028194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.039657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ed920 00:35:28.446 [2024-10-01 01:53:08.040945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.040976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.050910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f3e60 00:35:28.446 [2024-10-01 01:53:08.052162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 
01:53:08.052192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.063344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ebfd0 00:35:28.446 [2024-10-01 01:53:08.064692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.064735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.074381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e99d8 00:35:28.446 [2024-10-01 01:53:08.075283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.075312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.086283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eaab8 00:35:28.446 [2024-10-01 01:53:08.087109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.087140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.098591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ef270 00:35:28.446 [2024-10-01 01:53:08.099594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.109711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6020 00:35:28.446 [2024-10-01 01:53:08.111393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.446 [2024-10-01 01:53:08.111428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:28.446 [2024-10-01 01:53:08.121662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eea00 00:35:28.446 [2024-10-01 01:53:08.123033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.123071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.133382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eee38 00:35:28.447 [2024-10-01 01:53:08.134439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:28.447 [2024-10-01 01:53:08.134467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.144564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8e88 00:35:28.447 [2024-10-01 01:53:08.145602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.145645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.157786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ea680 00:35:28.447 [2024-10-01 01:53:08.159032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.159079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.170057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f5378 00:35:28.447 [2024-10-01 01:53:08.171444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.171487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.181192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fc998 00:35:28.447 [2024-10-01 01:53:08.182538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.182581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.192228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4298 00:35:28.447 [2024-10-01 01:53:08.193142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.193172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.204248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ec408 00:35:28.447 [2024-10-01 01:53:08.205093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.205123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.216662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f0ff8 00:35:28.447 [2024-10-01 01:53:08.217686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9267 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.227881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e7818 00:35:28.447 [2024-10-01 01:53:08.229563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.229594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.237971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e27f0 00:35:28.447 [2024-10-01 01:53:08.238765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.238808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.250370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f1868 00:35:28.447 [2024-10-01 01:53:08.251225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.251269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.262606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ec408 00:35:28.447 [2024-10-01 01:53:08.263648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.263692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.274814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ecc78 00:35:28.447 [2024-10-01 01:53:08.276018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.276048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.286736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e6738 00:35:28.447 [2024-10-01 01:53:08.287967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.288017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:28.447 [2024-10-01 01:53:08.298447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198edd58 00:35:28.447 [2024-10-01 01:53:08.299232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:3948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.447 [2024-10-01 01:53:08.299262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.312130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e9e10 00:35:28.708 [2024-10-01 01:53:08.313851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.313881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.324598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198de8a8 00:35:28.708 [2024-10-01 01:53:08.326426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.326470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.332951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fef90 00:35:28.708 [2024-10-01 01:53:08.333715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.333759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.345351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4b08 00:35:28.708 [2024-10-01 01:53:08.346215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.346259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.357685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ddc00 00:35:28.708 [2024-10-01 01:53:08.358808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.358837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.368911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e1710 00:35:28.708 [2024-10-01 01:53:08.369956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.370007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.381325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198df550 00:35:28.708 [2024-10-01 01:53:08.382545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:24954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.382588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.393687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eb760 00:35:28.708 [2024-10-01 01:53:08.395016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.395059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.406078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e88f8 00:35:28.708 [2024-10-01 01:53:08.407563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.407592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.418470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f2510 00:35:28.708 [2024-10-01 01:53:08.420121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.420166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.430706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e5658 00:35:28.708 [2024-10-01 01:53:08.432551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.432603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.439113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f0bc0 00:35:28.708 [2024-10-01 01:53:08.439853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.439898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.450288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fbcf0 00:35:28.708 [2024-10-01 01:53:08.451019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.451063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.462635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e7c50 00:35:28.708 [2024-10-01 01:53:08.463532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.463576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.475850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f9b30 00:35:28.708 [2024-10-01 01:53:08.476945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.476993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.487829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6cc8 00:35:28.708 [2024-10-01 01:53:08.488916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.488961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:28.708 [2024-10-01 01:53:08.500059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fb048 00:35:28.708 [2024-10-01 01:53:08.501191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.708 [2024-10-01 01:53:08.501236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.709 [2024-10-01 01:53:08.512248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4b08 00:35:28.709 [2024-10-01 01:53:08.513591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.709 [2024-10-01 01:53:08.513635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.709 [2024-10-01 01:53:08.523504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f46d0 00:35:28.709 [2024-10-01 01:53:08.524820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.709 [2024-10-01 01:53:08.524864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:28.709 20991.00 IOPS, 82.00 MiB/s [2024-10-01 01:53:08.533775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4298 00:35:28.709 [2024-10-01 01:53:08.534525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.709 [2024-10-01 01:53:08.534572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.709 [2024-10-01 01:53:08.546070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4b08 
00:35:28.709 [2024-10-01 01:53:08.546930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.709 [2024-10-01 01:53:08.546974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.709 [2024-10-01 01:53:08.557287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f0bc0 00:35:28.709 [2024-10-01 01:53:08.558171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.709 [2024-10-01 01:53:08.558216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.569929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e5220 00:35:28.970 [2024-10-01 01:53:08.571027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.571057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.582376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ee5c8 00:35:28.970 [2024-10-01 01:53:08.583548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.583590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.594726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e9e10 00:35:28.970 [2024-10-01 01:53:08.596116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.596146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.606756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4298 00:35:28.970 [2024-10-01 01:53:08.608120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.608165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.618556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f9f68 00:35:28.970 [2024-10-01 01:53:08.619543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.619588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.629439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with 
pdu=0x2000198ec840 00:35:28.970 [2024-10-01 01:53:08.630621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.630653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.641224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fa3a0 00:35:28.970 [2024-10-01 01:53:08.642149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.642178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.653596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e3498 00:35:28.970 [2024-10-01 01:53:08.654592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.654637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.665599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e7818 00:35:28.970 [2024-10-01 01:53:08.666705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.970 [2024-10-01 01:53:08.677895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6890 00:35:28.970 [2024-10-01 01:53:08.679094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.970 [2024-10-01 01:53:08.679139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.688939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f3e60 00:35:28.971 [2024-10-01 01:53:08.690153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.690196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.701340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fe2e8 00:35:28.971 [2024-10-01 01:53:08.702663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.702706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.713750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2356790) with pdu=0x2000198f7100 00:35:28.971 [2024-10-01 01:53:08.715281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.715325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.726267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fb480 00:35:28.971 [2024-10-01 01:53:08.727912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.727955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.738638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f96f8 00:35:28.971 [2024-10-01 01:53:08.740488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.740537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.747197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f1868 00:35:28.971 [2024-10-01 01:53:08.747931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.747974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.760558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6890 00:35:28.971 [2024-10-01 01:53:08.762444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.762475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.770655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f20d8 00:35:28.971 [2024-10-01 01:53:08.771562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.771605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.782733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e01f8 00:35:28.971 [2024-10-01 01:53:08.783652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.783696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.794990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2356790) with pdu=0x2000198e0ea0 00:35:28.971 [2024-10-01 01:53:08.795889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.795932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.807214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:28.971 [2024-10-01 01:53:08.808021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.808051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.971 [2024-10-01 01:53:08.819653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f1ca0 00:35:28.971 [2024-10-01 01:53:08.820625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.971 [2024-10-01 01:53:08.820656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.831003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fd640 00:35:29.233 [2024-10-01 01:53:08.832704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.832735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.843345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198de470 00:35:29.233 [2024-10-01 01:53:08.845257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.845288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.854560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e6300 00:35:29.233 [2024-10-01 01:53:08.855505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.855551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.866886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8a50 00:35:29.233 [2024-10-01 01:53:08.867919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.867962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.878021] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7538 00:35:29.233 [2024-10-01 01:53:08.879023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.879051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.891197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e0ea0 00:35:29.233 [2024-10-01 01:53:08.892485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.892519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.904522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e01f8 00:35:29.233 [2024-10-01 01:53:08.906247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.906276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.915528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eff18 00:35:29.233 [2024-10-01 01:53:08.916922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.916952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.926303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fdeb0 00:35:29.233 [2024-10-01 01:53:08.928136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.928167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.233 [2024-10-01 01:53:08.936519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ecc78 00:35:29.233 [2024-10-01 01:53:08.937366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.233 [2024-10-01 01:53:08.937410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:08.948933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ef6a8 00:35:29.234 [2024-10-01 01:53:08.949950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:08.949979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 
01:53:08.960913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8e88 00:35:29.234 [2024-10-01 01:53:08.961931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:08.961975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:08.974882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:29.234 [2024-10-01 01:53:08.976372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:08.976400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:08.986100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fda78 00:35:29.234 [2024-10-01 01:53:08.987563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:08.987606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:08.997104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ebb98 00:35:29.234 [2024-10-01 01:53:08.998131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:08.998161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.009080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f5be8 00:35:29.234 [2024-10-01 01:53:09.010057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.010087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.021460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fa3a0 00:35:29.234 [2024-10-01 01:53:09.022598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.035116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e2c28 00:35:29.234 [2024-10-01 01:53:09.037076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.037105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
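Note on the repeating pattern above: each event in this stretch of the log is a triple. tcp.c:2233 (data_crc32_calc_done) reports a data digest mismatch on the TCP qpair, then nvme_qpair.c prints the WRITE command that carried the corrupted payload, and finally the completion is printed with status (00/22), which the driver decodes as COMMAND TRANSIENT TRANSPORT ERROR, i.e. the digest-error injection is surfaced to the host as a retryable transport failure rather than a media error. The NVMe/TCP data digest (DDGST) is a CRC-32C over the PDU data. The sketch below is only an illustration of that checksum, assuming a plain bitwise CRC-32C; it is not SPDK's implementation, which uses its own (optionally hardware-accelerated) crc32c helpers.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Illustrative sketch of the digest family used for the NVMe/TCP DDGST;
 * not the SPDK code path that produced the log above. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    const char payload[] = "123456789"; /* standard CRC check string */
    printf("crc32c = 0x%08x\n", crc32c((const uint8_t *)payload, strlen(payload)));
    /* Expected CRC-32C check value for "123456789": 0xe3069283.
     * A receiver that computes a different value than the DDGST carried in
     * the PDU reports a data digest error, as seen throughout this log. */
    return 0;
}
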
00:35:29.234 [2024-10-01 01:53:09.043542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eaab8 00:35:29.234 [2024-10-01 01:53:09.044377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.044427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.055869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f46d0 00:35:29.234 [2024-10-01 01:53:09.056933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.056976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.067176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fb048 00:35:29.234 [2024-10-01 01:53:09.068163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.068208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.234 [2024-10-01 01:53:09.079542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:29.234 [2024-10-01 01:53:09.080812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.234 [2024-10-01 01:53:09.080843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.092224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8618 00:35:29.494 [2024-10-01 01:53:09.093554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.093597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.104518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198efae0 00:35:29.494 [2024-10-01 01:53:09.105983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.106036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.115510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e4140 00:35:29.494 [2024-10-01 01:53:09.116584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.116628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.127402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ff3c8 00:35:29.494 [2024-10-01 01:53:09.128339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.128370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.140683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e99d8 00:35:29.494 [2024-10-01 01:53:09.141874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.141905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.152837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6890 00:35:29.494 [2024-10-01 01:53:09.154858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.154892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.164761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f3a28 00:35:29.494 [2024-10-01 01:53:09.165784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.165832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.178042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ee5c8 00:35:29.494 [2024-10-01 01:53:09.179241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.494 [2024-10-01 01:53:09.179271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.494 [2024-10-01 01:53:09.191434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198efae0 00:35:29.494 [2024-10-01 01:53:09.192816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.192859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.203520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:29.495 [2024-10-01 01:53:09.204857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.204885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.216986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198dfdc0 00:35:29.495 [2024-10-01 01:53:09.218483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.218517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.230412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ff3c8 00:35:29.495 [2024-10-01 01:53:09.232101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.232130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.243887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e7818 00:35:29.495 [2024-10-01 01:53:09.245762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.245796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.257353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4f40 00:35:29.495 [2024-10-01 01:53:09.259371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.259414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.270710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198df118 00:35:29.495 [2024-10-01 01:53:09.272901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.272931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.279835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198eee38 00:35:29.495 [2024-10-01 01:53:09.280858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.280887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.293253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e4578 00:35:29.495 [2024-10-01 01:53:09.294446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.294489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.306649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ff3c8 00:35:29.495 [2024-10-01 01:53:09.308003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.308047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.320034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ee190 00:35:29.495 [2024-10-01 01:53:09.321570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.321604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.333448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:29.495 [2024-10-01 01:53:09.335151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.335182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.495 [2024-10-01 01:53:09.344396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e6b70 00:35:29.495 [2024-10-01 01:53:09.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.495 [2024-10-01 01:53:09.345265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.357935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198efae0 00:35:29.754 [2024-10-01 01:53:09.358951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.358982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.371356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e2c28 00:35:29.754 [2024-10-01 01:53:09.372565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.372601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.383330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e6fa8 00:35:29.754 [2024-10-01 01:53:09.385376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.385411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.394328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198df550 00:35:29.754 [2024-10-01 01:53:09.395360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.395404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.407760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198e7818 00:35:29.754 [2024-10-01 01:53:09.408919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.408948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.421210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f7da8 00:35:29.754 [2024-10-01 01:53:09.422553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.422597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.434668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f6890 00:35:29.754 [2024-10-01 01:53:09.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.436211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.447872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f0ff8 00:35:29.754 [2024-10-01 01:53:09.449559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.449588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.461320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f96f8 00:35:29.754 [2024-10-01 01:53:09.463194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.463223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.474664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f4f40 00:35:29.754 [2024-10-01 01:53:09.476682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 
01:53:09.476726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.486950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8e88 00:35:29.754 [2024-10-01 01:53:09.488625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.488669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.499050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198f8a50 00:35:29.754 [2024-10-01 01:53:09.500666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.754 [2024-10-01 01:53:09.500694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:29.754 [2024-10-01 01:53:09.512457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198fd208 00:35:29.755 [2024-10-01 01:53:09.514250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.755 [2024-10-01 01:53:09.514294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:29.755 [2024-10-01 01:53:09.525715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356790) with pdu=0x2000198ebb98 00:35:29.755 [2024-10-01 01:53:09.527695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.755 [2024-10-01 01:53:09.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:29.755 20971.00 IOPS, 81.92 MiB/s 00:35:29.755 Latency(us) 00:35:29.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.755 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.755 nvme0n1 : 2.01 20974.43 81.93 0.00 0.00 6093.48 3179.71 14757.74 00:35:29.755 =================================================================================================================== 00:35:29.755 Total : 20974.43 81.93 0.00 0.00 6093.48 3179.71 14757.74 00:35:29.755 { 00:35:29.755 "results": [ 00:35:29.755 { 00:35:29.755 "job": "nvme0n1", 00:35:29.755 "core_mask": "0x2", 00:35:29.755 "workload": "randwrite", 00:35:29.755 "status": "finished", 00:35:29.755 "queue_depth": 128, 00:35:29.755 "io_size": 4096, 00:35:29.755 "runtime": 2.005776, 00:35:29.755 "iops": 20974.425858121744, 00:35:29.755 "mibps": 81.93135100828806, 00:35:29.755 "io_failed": 0, 00:35:29.755 "io_timeout": 0, 00:35:29.755 "avg_latency_us": 6093.475130162251, 00:35:29.755 "min_latency_us": 3179.7096296296295, 00:35:29.755 "max_latency_us": 14757.736296296296 00:35:29.755 } 00:35:29.755 ], 00:35:29.755 "core_count": 1 00:35:29.755 } 00:35:29.755 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:29.755 01:53:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:29.755 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:29.755 | .driver_specific 00:35:29.755 | .nvme_error 00:35:29.755 | .status_code 00:35:29.755 | .command_transient_transport_error' 00:35:29.755 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:30.013 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1057004 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1057004 ']' 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1057004 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:30.014 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1057004 00:35:30.272 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:30.272 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:30.272 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1057004' 00:35:30.272 killing process with pid 1057004 00:35:30.272 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1057004 00:35:30.272 Received shutdown signal, test time was about 2.000000 seconds 00:35:30.272 00:35:30.272 Latency(us) 00:35:30.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.272 =================================================================================================================== 00:35:30.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.272 01:53:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1057004 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1057517 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # 
waitforlisten 1057517 /var/tmp/bperf.sock 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1057517 ']' 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:30.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.272 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.530 [2024-10-01 01:53:10.158482] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:35:30.530 [2024-10-01 01:53:10.158579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057517 ] 00:35:30.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.530 Zero copy mechanism will not be used. 00:35:30.530 [2024-10-01 01:53:10.227581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.530 [2024-10-01 01:53:10.320362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.788 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:30.788 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:30.789 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:30.789 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.047 01:53:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:31.616 nvme0n1 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:31.616 01:53:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:31.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:31.616 Zero copy mechanism will not be used. 00:35:31.616 Running I/O for 2 seconds... 00:35:31.616 [2024-10-01 01:53:11.324651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.325056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.325094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.333354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.333747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.333783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.343145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.343506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.343551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.352947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.353289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.353350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.362815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.363177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.363208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.371613] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.371966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.372016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.379646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.379991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.380038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.387985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.388417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.388451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.396342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.396688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.396723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.405136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.405503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.405539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.412915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.413256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.413304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.420935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.421276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.421306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
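Each pair of records above traces the path this test exercises: tcp.c reports a CRC32C data-digest (DDGST) mismatch on the PDU payload, and the corresponding WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which nvme_qpair.c then prints. As a rough, self-contained illustration of what that digest comparison amounts to, the sketch below computes a bit-wise CRC-32C over a 128 KiB payload and checks it against a received digest. It is only a sketch: crc32c() and ddgst_ok() are illustrative names invented here, and SPDK's actual path goes through its own crc32c helpers and the accel framework (the operation that accel_error_inject_error -o crc32c -t corrupt targets in this run), not a Python loop.

#!/usr/bin/env python3
# Illustrative CRC-32C (Castagnoli) data-digest check -- not SPDK's helper.

def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    poly = 0x82F63B78                      # reflected Castagnoli polynomial
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly & -(crc & 1))
    return crc ^ 0xFFFFFFFF

def ddgst_ok(pdu_data: bytes, received_ddgst: int) -> bool:
    # True when the digest carried with the PDU matches its payload.
    return crc32c(pdu_data) == received_ddgst

if __name__ == "__main__":
    assert crc32c(b"123456789") == 0xE3069283   # standard CRC-32C check value
    payload = bytes(131072)                     # 128 KiB, the I/O size of this bdevperf pass
    good = crc32c(payload)
    print(ddgst_ok(payload, good))              # True
    print(ddgst_ok(payload, good ^ 1))          # False -> "Data digest error"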
00:35:31.616 [2024-10-01 01:53:11.429386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.429736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.429771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.437412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.437744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.437777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.445313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.445659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.445691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.453566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.453897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.453929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.461341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.461649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.461681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.616 [2024-10-01 01:53:11.468562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.616 [2024-10-01 01:53:11.468869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.616 [2024-10-01 01:53:11.468900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.878 [2024-10-01 01:53:11.475639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.878 [2024-10-01 01:53:11.475945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.878 [2024-10-01 01:53:11.475980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.878 [2024-10-01 01:53:11.482813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.878 [2024-10-01 01:53:11.482940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.878 [2024-10-01 01:53:11.482973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.878 [2024-10-01 01:53:11.490209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.878 [2024-10-01 01:53:11.490531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.878 [2024-10-01 01:53:11.490561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.878 [2024-10-01 01:53:11.498225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.878 [2024-10-01 01:53:11.498540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.878 [2024-10-01 01:53:11.498570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.878 [2024-10-01 01:53:11.506592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.506718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.506744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.515625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.515929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.515962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.524404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.524717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.524747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.532265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.532383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.532409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.541304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.541618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.541650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.550150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.550498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.550527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.558104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.558460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.558509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.566296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.566639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.573885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.574216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.574246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.581188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.581515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.581545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.588564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.588888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.588918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.596315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.596699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.596728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.604572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.604883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.604912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.612472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.612851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.612880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.619880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.620316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.620360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.627554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.627853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.627885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.635181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.635558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.642872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.643223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 
[2024-10-01 01:53:11.643254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.650113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.650441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.650471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.657813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.658155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.658185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.665630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.665946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.665975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.672861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.673204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.673249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.679988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.680352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.680381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.686903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.687258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.694477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.694783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.694812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.702374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.879 [2024-10-01 01:53:11.702677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.879 [2024-10-01 01:53:11.702709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.879 [2024-10-01 01:53:11.710408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.880 [2024-10-01 01:53:11.710710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.880 [2024-10-01 01:53:11.710738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.880 [2024-10-01 01:53:11.718408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.880 [2024-10-01 01:53:11.718714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.880 [2024-10-01 01:53:11.718743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.880 [2024-10-01 01:53:11.727322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:31.880 [2024-10-01 01:53:11.727730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.880 [2024-10-01 01:53:11.727759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.736674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.737016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.737049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.745093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.745429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.745458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.754333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.754645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.754674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.763050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.763378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.763408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.771818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.772145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.772181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.779660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.141 [2024-10-01 01:53:11.779949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.141 [2024-10-01 01:53:11.779982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.141 [2024-10-01 01:53:11.788441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.788745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.788775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.796793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.797117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.797148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.804793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.804945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.804972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.813453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.813817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.813848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.821524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.821844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.821877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.828644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.828943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.828974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.835842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.836168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.836204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.843263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.843587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.843630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.850302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.850589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.850618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.857324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.857610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.857639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.864272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 
[2024-10-01 01:53:11.864629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.864658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.871157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.871465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.871498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.878100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.878425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.878456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.885528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.885828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.885859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.892548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.892831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.892876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.899497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.899776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.899805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.906266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.906590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.906621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.913378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.913663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.913709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.920140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.920445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.920478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.927577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.927859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.927894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.934586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.934845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.934875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.941367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.941647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.941676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.948144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.948448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.948493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.955249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.955602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.955633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.961845] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.962145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.962182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.968645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.968907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.142 [2024-10-01 01:53:11.968937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.142 [2024-10-01 01:53:11.975408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.142 [2024-10-01 01:53:11.975699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.143 [2024-10-01 01:53:11.975728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.143 [2024-10-01 01:53:11.982484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.143 [2024-10-01 01:53:11.982744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.143 [2024-10-01 01:53:11.982797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.143 [2024-10-01 01:53:11.989033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.143 [2024-10-01 01:53:11.989337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.143 [2024-10-01 01:53:11.989383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.402 [2024-10-01 01:53:11.996306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.402 [2024-10-01 01:53:11.996582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.402 [2024-10-01 01:53:11.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.402 [2024-10-01 01:53:12.002978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.402 [2024-10-01 01:53:12.003271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.402 [2024-10-01 01:53:12.003300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:32.402 [2024-10-01 01:53:12.009976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.402 [2024-10-01 01:53:12.010273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.402 [2024-10-01 01:53:12.010317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.402 [2024-10-01 01:53:12.016593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.402 [2024-10-01 01:53:12.016876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.402 [2024-10-01 01:53:12.016907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.402 [2024-10-01 01:53:12.023904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.402 [2024-10-01 01:53:12.024217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.024247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.030860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.031176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.031206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.038127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.038426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.045392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.045657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.045685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.052395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.052653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.059216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.059511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.059539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.066210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.066512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.066545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.073666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.073967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.074007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.080788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.081114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.081150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.087645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.087930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.087960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.094528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.094769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.094799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.101712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.101971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.102028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.108772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.109122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.109170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.115933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.116219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.116250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.122520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.122752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.122780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.129268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.129546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.129589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.135850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.136359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.136387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.142615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.142862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.142891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.149079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.149341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.149397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.155607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.155906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.155951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.162682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.162938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.162968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.169740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.170024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.170062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.176780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.177039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.177069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.183649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.183906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.183936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.190363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.190597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.190626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.197337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.197616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 
[2024-10-01 01:53:12.197655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.403 [2024-10-01 01:53:12.204451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.403 [2024-10-01 01:53:12.204686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.403 [2024-10-01 01:53:12.204714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.211243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.211507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.211535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.217580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.217827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.217855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.224588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.224846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.224876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.230783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.231043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.231074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.237428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.237677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.237704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.244116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.244425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.244455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.404 [2024-10-01 01:53:12.250631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.404 [2024-10-01 01:53:12.250889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.404 [2024-10-01 01:53:12.250923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.258127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.258399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.258450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.266525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.266860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.275108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.275452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.275483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.283611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.283898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.283926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.292049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.292373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.292402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.299519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.299818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.299851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.664 [2024-10-01 01:53:12.306373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.664 [2024-10-01 01:53:12.306666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.664 [2024-10-01 01:53:12.306697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.664 4069.00 IOPS, 508.62 MiB/s [2024-10-01 01:53:12.314845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.315164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.315195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.322208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.322581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.329230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.329495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.329533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.336545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.336855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.336884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.345125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.345481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.345511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.353425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 
01:53:12.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.353710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.361851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.362215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.362246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.370443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.370751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.370782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.378611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.378937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.378967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.387025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.387350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.387381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.394152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.394480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.394511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.401464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.401734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.401790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.408774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with 
pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.409093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.409124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.416270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.416550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.416580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.424691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.424977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.425032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.432885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.433296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.433329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.441265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.441692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.441722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.448260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.448633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.448667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.456583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.456902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.456935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.463828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.464114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.464146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.470843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.471143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.471178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.478183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.478428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.478458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.485410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.485681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.485710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.492166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.492450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.492480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.499438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.499685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.499715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.506757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.507037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.665 [2024-10-01 01:53:12.507068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.665 [2024-10-01 01:53:12.513277] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.665 [2024-10-01 01:53:12.513572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.666 [2024-10-01 01:53:12.513613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.520396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.520690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.520721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.527700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.527981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.528035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.535195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.535457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.535487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.542558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.542853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.542884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.549883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.550162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.550195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.557105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.557378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.557409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:32.927 [2024-10-01 01:53:12.564469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.564714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.564786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.571877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.572179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.572211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.579207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.579467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.579497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.586184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.586452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.586490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.592877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.593162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.593195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.599581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.599826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.599855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.606750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.607035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.607068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.613465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.613706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.613746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.620061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.620330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.620359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.626746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.626989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.627044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.927 [2024-10-01 01:53:12.633962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.927 [2024-10-01 01:53:12.634237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.927 [2024-10-01 01:53:12.634269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.640880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.641147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.641176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.647842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.648125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.648157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.655120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.655387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.655418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.661651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.661897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.661927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.668636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.668905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.675503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.675746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.675776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.682536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.682800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.682831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.689650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.689885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.689941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.696772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.697063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.697095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.703820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.704066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.704097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.710488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.710734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.710764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.716996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.717210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.717239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.723606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.723870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.723900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.729602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.729868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.729898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.736462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.736782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.736813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.742965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.743244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.743275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.749374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.749671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 
[2024-10-01 01:53:12.749701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.755794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.756134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.756165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.762270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.762586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.762623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.769379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.769700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.769731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:32.928 [2024-10-01 01:53:12.775477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:32.928 [2024-10-01 01:53:12.775747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.928 [2024-10-01 01:53:12.775777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.781480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.781748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.781777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.787631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.787909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.793668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.793958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.793988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.800163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.800432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.800464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.806818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.807102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.807133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.813534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.813823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.813854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.820288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.820595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.820625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.826942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.827267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.827298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.833841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.834138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.834171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.840833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.841135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.841171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.847759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.848076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.848107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.854564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.854899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.854930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.861469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.861720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.861750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.868379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.868655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.868687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.874597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.874897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.874927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.882382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.882683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.882719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.890221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.890574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.890604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.898393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.898782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.898811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.906713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.907074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.907105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.914888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.915192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.915227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.922178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.922468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.922498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.928700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.928992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.929032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.935445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.935680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.935711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.941909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 
[2024-10-01 01:53:12.942219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.190 [2024-10-01 01:53:12.942264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.190 [2024-10-01 01:53:12.948496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.190 [2024-10-01 01:53:12.948725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.948755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.954666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.954910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.954940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.960799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.961146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.967283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.967585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.967616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.973743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.974062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.974094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.980517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.980764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.980794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.986795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.987054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.987091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.992877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.993175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:12.999462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:12.999777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:12.999808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.006182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.006454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.006484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.012720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.012995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.013033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.019453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.019724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.019754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.025858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.026122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.026153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.031869] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.032126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.032157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.191 [2024-10-01 01:53:13.038690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.191 [2024-10-01 01:53:13.038975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.191 [2024-10-01 01:53:13.039027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.451 [2024-10-01 01:53:13.045483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.451 [2024-10-01 01:53:13.045721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.451 [2024-10-01 01:53:13.045752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.451 [2024-10-01 01:53:13.052134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.451 [2024-10-01 01:53:13.052438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.451 [2024-10-01 01:53:13.052476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.451 [2024-10-01 01:53:13.058854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.451 [2024-10-01 01:53:13.059154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.451 [2024-10-01 01:53:13.059185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.451 [2024-10-01 01:53:13.065217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.451 [2024-10-01 01:53:13.065468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.451 [2024-10-01 01:53:13.065498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.451 [2024-10-01 01:53:13.071437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.451 [2024-10-01 01:53:13.071712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.451 [2024-10-01 01:53:13.071746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:33.451 [2024-10-01 01:53:13.078235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.078497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.078546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.084776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.085043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.090779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.091195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.091227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.097722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.097973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.098028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.104544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.104811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.104841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.111433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.111669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.111699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.117473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.117750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.117781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.123890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.124191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.124222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.130424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.130657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.130687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.136904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.137184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.137214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.143198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.143443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.143496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.149696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.149994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.150061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.156064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.156315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.156346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.162449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.162686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.162716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.169378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.169609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.169639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.175878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.176166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.176197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.182372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.182609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.182638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.188728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.189016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.189047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.195605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.195886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.202415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.202675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.202705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.208745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.209030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.209061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.215024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.215282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.215326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.221446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.221720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.221758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.228417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.228648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.228688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.235607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.235870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.241923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.242228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.452 [2024-10-01 01:53:13.242260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.452 [2024-10-01 01:53:13.248068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.452 [2024-10-01 01:53:13.248372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.248417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.254841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.255131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 
[2024-10-01 01:53:13.255162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.260707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.260946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.260975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.266645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.266961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.273385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.273690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.273729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.280694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.281070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.281101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.287473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.287744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.293842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.294112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.294143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:33.453 [2024-10-01 01:53:13.300316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.453 [2024-10-01 01:53:13.300602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.453 [2024-10-01 01:53:13.300633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:33.711 [2024-10-01 01:53:13.306577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.711 [2024-10-01 01:53:13.306838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.711 [2024-10-01 01:53:13.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:33.711 4288.50 IOPS, 536.06 MiB/s [2024-10-01 01:53:13.314112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2356ad0) with pdu=0x2000198fef90 00:35:33.711 [2024-10-01 01:53:13.314365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.711 [2024-10-01 01:53:13.314395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:33.711 00:35:33.712 Latency(us) 00:35:33.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.712 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:33.712 nvme0n1 : 2.00 4287.52 535.94 0.00 0.00 3723.35 2669.99 14272.28 00:35:33.712 =================================================================================================================== 00:35:33.712 Total : 4287.52 535.94 0.00 0.00 3723.35 2669.99 14272.28 00:35:33.712 { 00:35:33.712 "results": [ 00:35:33.712 { 00:35:33.712 "job": "nvme0n1", 00:35:33.712 "core_mask": "0x2", 00:35:33.712 "workload": "randwrite", 00:35:33.712 "status": "finished", 00:35:33.712 "queue_depth": 16, 00:35:33.712 "io_size": 131072, 00:35:33.712 "runtime": 2.004189, 00:35:33.712 "iops": 4287.519789800263, 00:35:33.712 "mibps": 535.9399737250329, 00:35:33.712 "io_failed": 0, 00:35:33.712 "io_timeout": 0, 00:35:33.712 "avg_latency_us": 3723.3501708108665, 00:35:33.712 "min_latency_us": 2669.9851851851854, 00:35:33.712 "max_latency_us": 14272.284444444444 00:35:33.712 } 00:35:33.712 ], 00:35:33.712 "core_count": 1 00:35:33.712 } 00:35:33.712 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:33.712 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:33.712 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:33.712 | .driver_specific 00:35:33.712 | .nvme_error 00:35:33.712 | .status_code 00:35:33.712 | .command_transient_transport_error' 00:35:33.712 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 277 > 0 )) 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1057517 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1057517 ']' 00:35:33.972 01:53:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1057517 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1057517 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1057517' 00:35:33.972 killing process with pid 1057517 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1057517 00:35:33.972 Received shutdown signal, test time was about 2.000000 seconds 00:35:33.972 00:35:33.972 Latency(us) 00:35:33.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.972 =================================================================================================================== 00:35:33.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:33.972 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1057517 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1056135 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1056135 ']' 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1056135 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056135 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056135' 00:35:34.233 killing process with pid 1056135 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1056135 00:35:34.233 01:53:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1056135 00:35:34.492 00:35:34.492 real 0m15.821s 00:35:34.492 user 0m31.183s 00:35:34.492 sys 0m4.488s 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:34.492 ************************************ 00:35:34.492 END TEST nvmf_digest_error 00:35:34.492 ************************************ 00:35:34.492 01:53:14 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.492 rmmod nvme_tcp 00:35:34.492 rmmod nvme_fabrics 00:35:34.492 rmmod nvme_keyring 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 1056135 ']' 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 1056135 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1056135 ']' 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1056135 00:35:34.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1056135) - No such process 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1056135 is not found' 00:35:34.492 Process with pid 1056135 is not found 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.492 01:53:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.030 00:35:37.030 real 0m35.988s 00:35:37.030 user 1m3.603s 00:35:37.030 sys 0m10.219s 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.030 
************************************ 00:35:37.030 END TEST nvmf_digest 00:35:37.030 ************************************ 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.030 ************************************ 00:35:37.030 START TEST nvmf_bdevperf 00:35:37.030 ************************************ 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:37.030 * Looking for test storage... 00:35:37.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.030 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:37.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.031 --rc genhtml_branch_coverage=1 00:35:37.031 --rc genhtml_function_coverage=1 00:35:37.031 --rc genhtml_legend=1 00:35:37.031 --rc geninfo_all_blocks=1 00:35:37.031 --rc geninfo_unexecuted_blocks=1 00:35:37.031 00:35:37.031 ' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:37.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.031 --rc genhtml_branch_coverage=1 00:35:37.031 --rc genhtml_function_coverage=1 00:35:37.031 --rc genhtml_legend=1 00:35:37.031 --rc geninfo_all_blocks=1 00:35:37.031 --rc geninfo_unexecuted_blocks=1 00:35:37.031 00:35:37.031 ' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:37.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.031 --rc genhtml_branch_coverage=1 00:35:37.031 --rc genhtml_function_coverage=1 00:35:37.031 --rc genhtml_legend=1 00:35:37.031 --rc geninfo_all_blocks=1 00:35:37.031 --rc geninfo_unexecuted_blocks=1 00:35:37.031 00:35:37.031 ' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:37.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.031 --rc genhtml_branch_coverage=1 00:35:37.031 --rc genhtml_function_coverage=1 00:35:37.031 --rc genhtml_legend=1 00:35:37.031 --rc geninfo_all_blocks=1 00:35:37.031 --rc geninfo_unexecuted_blocks=1 00:35:37.031 00:35:37.031 ' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:37.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.031 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.032 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:37.032 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:37.032 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.032 01:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:38.934 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:38.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:38.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.935 
01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:38.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:38.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:38.935 01:53:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:38.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:35:38.935 00:35:38.935 --- 10.0.0.2 ping statistics --- 00:35:38.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.935 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:38.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:35:38.935 00:35:38.935 --- 10.0.0.1 ping statistics --- 00:35:38.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.935 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1059882 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1059882 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1059882 ']' 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:38.935 01:53:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.194 [2024-10-01 01:53:18.802332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:39.194 [2024-10-01 01:53:18.802435] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.194 [2024-10-01 01:53:18.876620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:39.194 [2024-10-01 01:53:18.967152] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.194 [2024-10-01 01:53:18.967205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.194 [2024-10-01 01:53:18.967220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.194 [2024-10-01 01:53:18.967232] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.194 [2024-10-01 01:53:18.967242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.194 [2024-10-01 01:53:18.967328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:39.194 [2024-10-01 01:53:18.967389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:39.194 [2024-10-01 01:53:18.967392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 [2024-10-01 01:53:19.099527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 Malloc0 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:39.452 [2024-10-01 01:53:19.167184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.452 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:39.453 { 00:35:39.453 "params": { 00:35:39.453 "name": "Nvme$subsystem", 00:35:39.453 "trtype": "$TEST_TRANSPORT", 00:35:39.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.453 "adrfam": "ipv4", 00:35:39.453 "trsvcid": "$NVMF_PORT", 00:35:39.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.453 "hdgst": ${hdgst:-false}, 00:35:39.453 "ddgst": ${ddgst:-false} 00:35:39.453 }, 00:35:39.453 "method": "bdev_nvme_attach_controller" 00:35:39.453 } 00:35:39.453 EOF 00:35:39.453 )") 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:39.453 01:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:39.453 "params": { 00:35:39.453 "name": "Nvme1", 00:35:39.453 "trtype": "tcp", 00:35:39.453 "traddr": "10.0.0.2", 00:35:39.453 "adrfam": "ipv4", 00:35:39.453 "trsvcid": "4420", 00:35:39.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.453 "hdgst": false, 00:35:39.453 "ddgst": false 00:35:39.453 }, 00:35:39.453 "method": "bdev_nvme_attach_controller" 00:35:39.453 }' 00:35:39.453 [2024-10-01 01:53:19.215429] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:39.453 [2024-10-01 01:53:19.215517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060026 ] 00:35:39.453 [2024-10-01 01:53:19.277687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.711 [2024-10-01 01:53:19.364648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.969 Running I/O for 1 seconds... 00:35:40.925 8382.00 IOPS, 32.74 MiB/s 00:35:40.925 Latency(us) 00:35:40.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.925 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:40.925 Verification LBA range: start 0x0 length 0x4000 00:35:40.925 Nvme1n1 : 1.01 8406.59 32.84 0.00 0.00 15161.58 1820.44 14951.92 00:35:40.925 =================================================================================================================== 00:35:40.925 Total : 8406.59 32.84 0.00 0.00 15161.58 1820.44 14951.92 00:35:41.213 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1060176 00:35:41.213 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:41.213 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:41.214 { 00:35:41.214 "params": { 00:35:41.214 "name": "Nvme$subsystem", 00:35:41.214 "trtype": "$TEST_TRANSPORT", 00:35:41.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.214 "adrfam": "ipv4", 00:35:41.214 "trsvcid": "$NVMF_PORT", 00:35:41.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.214 "hdgst": ${hdgst:-false}, 00:35:41.214 "ddgst": ${ddgst:-false} 00:35:41.214 }, 00:35:41.214 "method": "bdev_nvme_attach_controller" 00:35:41.214 } 00:35:41.214 EOF 00:35:41.214 )") 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:35:41.214 01:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:41.214 "params": { 00:35:41.214 "name": "Nvme1", 00:35:41.214 "trtype": "tcp", 00:35:41.214 "traddr": "10.0.0.2", 00:35:41.214 "adrfam": "ipv4", 00:35:41.214 "trsvcid": "4420", 00:35:41.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:41.214 "hdgst": false, 00:35:41.214 "ddgst": false 00:35:41.214 }, 00:35:41.214 "method": "bdev_nvme_attach_controller" 00:35:41.214 }' 00:35:41.214 [2024-10-01 01:53:20.915860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:41.214 [2024-10-01 01:53:20.915949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060176 ] 00:35:41.214 [2024-10-01 01:53:20.976382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.472 [2024-10-01 01:53:21.061237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.472 Running I/O for 15 seconds... 00:35:44.047 8256.00 IOPS, 32.25 MiB/s 8291.00 IOPS, 32.39 MiB/s 01:53:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1059882 00:35:44.047 01:53:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:44.047 [2024-10-01 01:53:23.884707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.884814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.884889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.884926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.884962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.884980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.885016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.885035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.047 [2024-10-01 01:53:23.885079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.047 [2024-10-01 01:53:23.885097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [ ... the same READ command / 'ABORTED - SQ DELETION (00/08)' completion pair repeats for every remaining outstanding I/O, lba:43912 through lba:44384, after nvmf_tgt pid 1059882 was killed ... ] 00:35:44.049 [2024-10-01 01:53:23.887189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.049 [2024-10-01 01:53:23.887203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.887775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887906] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.887971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.887995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:44.049 [2024-10-01 01:53:23.888368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.049 [2024-10-01 01:53:23.888401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.049 [2024-10-01 01:53:23.888418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:44.050 [2024-10-01 01:53:23.888618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888944] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.888989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.050 [2024-10-01 01:53:23.889214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3d980 is same with the state(6) to be set 00:35:44.050 [2024-10-01 01:53:23.889244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:44.050 [2024-10-01 01:53:23.889257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:44.050 [2024-10-01 01:53:23.889269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:35:44.050 [2024-10-01 01:53:23.889301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 
01:53:23.889378] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d3d980 was disconnected and freed. reset controller. 00:35:44.050 [2024-10-01 01:53:23.889457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:44.050 [2024-10-01 01:53:23.889481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:44.050 [2024-10-01 01:53:23.889513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:44.050 [2024-10-01 01:53:23.889544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:44.050 [2024-10-01 01:53:23.889579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:44.050 [2024-10-01 01:53:23.889594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.050 [2024-10-01 01:53:23.893411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.050 [2024-10-01 01:53:23.893464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.050 [2024-10-01 01:53:23.894121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.050 [2024-10-01 01:53:23.894152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.050 [2024-10-01 01:53:23.894170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.050 [2024-10-01 01:53:23.894414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.050 [2024-10-01 01:53:23.894607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.050 [2024-10-01 01:53:23.894627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.050 [2024-10-01 01:53:23.894643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.050 [2024-10-01 01:53:23.897922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
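The cycle recorded above repeats from here on: the qpair is disconnected and freed, the bdev_nvme layer resets the controller, and each reconnect attempt fails because the TCP connect to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED on Linux). As a minimal illustration only, and not SPDK's posix_sock_create() implementation, the following standard POSIX C sketch reproduces the same errno when nothing is listening on that address and port (assuming the address is otherwise reachable):

/* Illustrative sketch only: shows the errno reported in the log
 * (111 == ECONNREFUSED) by attempting a plain TCP connect to the
 * target address/port used in this test (10.0.0.2:4420).
 * This is NOT SPDK's posix_sock_create(); it is standard POSIX. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}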
00:35:44.311 [2024-10-01 01:53:23.907636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.908085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.908119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.908147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.908385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.908627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.908651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.908667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.912232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.311 [2024-10-01 01:53:23.921670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.922103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.922136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.922160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.922398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.922640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.922665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.922680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.926248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.311 [2024-10-01 01:53:23.935520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.936026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.936054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.936085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.936343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.936585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.936609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.936626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.940187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.311 [2024-10-01 01:53:23.949430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.949845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.949873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.949888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.950136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.950379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.950403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.950418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.953973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.311 [2024-10-01 01:53:23.963429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.963830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.963871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.963898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.964160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.964406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.964431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.964446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.968017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.311 [2024-10-01 01:53:23.977318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.977726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.977754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.977775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.978020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.978265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.978289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.978305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.981855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.311 [2024-10-01 01:53:23.991298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:23.991723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:23.991756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:23.991775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:23.992024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:23.992267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:23.992291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:23.992307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:23.995855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.311 [2024-10-01 01:53:24.005296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.311 [2024-10-01 01:53:24.005719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.311 [2024-10-01 01:53:24.005751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.311 [2024-10-01 01:53:24.005776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.311 [2024-10-01 01:53:24.006025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.311 [2024-10-01 01:53:24.006268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.311 [2024-10-01 01:53:24.006292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.311 [2024-10-01 01:53:24.006308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.311 [2024-10-01 01:53:24.009862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.311 [2024-10-01 01:53:24.019301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.019730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.019757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.019773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.020012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.020268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.020298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.020315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.023868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.312 [2024-10-01 01:53:24.033315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.033752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.033784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.033802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.034051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.034294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.034318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.034334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.037901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.312 [2024-10-01 01:53:24.047174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.047601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.047634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.047653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.047890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.048144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.048170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.048186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.051738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.312 [2024-10-01 01:53:24.061178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.061569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.061608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.061626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.061867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.062121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.062147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.062163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.065716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.312 [2024-10-01 01:53:24.075158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.075561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.075598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.075616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.075857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.076112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.076137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.076153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.079708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.312 [2024-10-01 01:53:24.089146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.089558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.089590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.089614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.089851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.090104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.090129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.090145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.093695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.312 [2024-10-01 01:53:24.103137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.103557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.103588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.103617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.103854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.104108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.104133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.104149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.107701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.312 [2024-10-01 01:53:24.117143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.117561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.117593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.117612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.117855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.118118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.118143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.118159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.121711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.312 [2024-10-01 01:53:24.131160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.131581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.131613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.131631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.131867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.132120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.132146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.132162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.135710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.312 [2024-10-01 01:53:24.145293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.312 [2024-10-01 01:53:24.145698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.312 [2024-10-01 01:53:24.145733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.312 [2024-10-01 01:53:24.145752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.312 [2024-10-01 01:53:24.145991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.312 [2024-10-01 01:53:24.146272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.312 [2024-10-01 01:53:24.146304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.312 [2024-10-01 01:53:24.146322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.312 [2024-10-01 01:53:24.149993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.312 [2024-10-01 01:53:24.159343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.313 [2024-10-01 01:53:24.159738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.313 [2024-10-01 01:53:24.159772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.313 [2024-10-01 01:53:24.159791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.313 [2024-10-01 01:53:24.160041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.313 [2024-10-01 01:53:24.160283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.313 [2024-10-01 01:53:24.160310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.313 [2024-10-01 01:53:24.160333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.572 [2024-10-01 01:53:24.163891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.572 [2024-10-01 01:53:24.173342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.572 [2024-10-01 01:53:24.173864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.173915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.173934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.174185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.174427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.174453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.174469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.178027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.573 [2024-10-01 01:53:24.187267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.187688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.187722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.187741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.187979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.188236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.188262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.188278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.191834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.573 [2024-10-01 01:53:24.201292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.201715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.201748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.201767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.202018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.202262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.202288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.202305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.205860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.573 [2024-10-01 01:53:24.215317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.215706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.215744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.215763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.216014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.216257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.216282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.216299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.219855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.573 [2024-10-01 01:53:24.229320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.229739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.229771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.229791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.230068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.230312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.230337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.230353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.233912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.573 [2024-10-01 01:53:24.243176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.243584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.243620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.243639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.243878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.244135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.244161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.244177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.247741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.573 7288.33 IOPS, 28.47 MiB/s [2024-10-01 01:53:24.257048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.257444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.257477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.257497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.257735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.257984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.258023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.258041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.261597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.573 [2024-10-01 01:53:24.271048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.271472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.271505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.271524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.271763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.272020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.272046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.272063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.275617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.573 [2024-10-01 01:53:24.285065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.285505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.285537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.285555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.285793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.286048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.286074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.573 [2024-10-01 01:53:24.286090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.573 [2024-10-01 01:53:24.289648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.573 [2024-10-01 01:53:24.298885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.573 [2024-10-01 01:53:24.299308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.573 [2024-10-01 01:53:24.299340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.573 [2024-10-01 01:53:24.299359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.573 [2024-10-01 01:53:24.299597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.573 [2024-10-01 01:53:24.299838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.573 [2024-10-01 01:53:24.299863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.299879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.303454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.574 [2024-10-01 01:53:24.312899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.313343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.313377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.313395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.313632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.313875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.313900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.313916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.317483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.574 [2024-10-01 01:53:24.326726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.327125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.327159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.327178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.327416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.327657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.327683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.327699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.331268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.574 [2024-10-01 01:53:24.340733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.341134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.341167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.341185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.341423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.341664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.341689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.341705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.345275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.574 [2024-10-01 01:53:24.354723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.355142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.355181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.355200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.355439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.355680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.355705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.355721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.359285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.574 [2024-10-01 01:53:24.368728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.369151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.369184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.369202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.369440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.369681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.369706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.369722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.373290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.574 [2024-10-01 01:53:24.382731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.383148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.383180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.383199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.383436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.383678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.383703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.383719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.387278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.574 [2024-10-01 01:53:24.396803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.397206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.397239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.397259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.397496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.397744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.397771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.397787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.401355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.574 [2024-10-01 01:53:24.410805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.411210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.411243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.411261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.411499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.574 [2024-10-01 01:53:24.411739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.574 [2024-10-01 01:53:24.411765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.574 [2024-10-01 01:53:24.411781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.574 [2024-10-01 01:53:24.415350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.574 [2024-10-01 01:53:24.424840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.574 [2024-10-01 01:53:24.425281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.574 [2024-10-01 01:53:24.425314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.574 [2024-10-01 01:53:24.425334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.574 [2024-10-01 01:53:24.425571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.425815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.425841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.425857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.429436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.834 [2024-10-01 01:53:24.438708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.439152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.439186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.439205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.439443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.439685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.439710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.439727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.443297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.834 [2024-10-01 01:53:24.452553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.452978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.453019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.453040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.453279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.453521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.453547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.453563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.457122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.834 [2024-10-01 01:53:24.466577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.466988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.467031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.467050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.467288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.467532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.467557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.467573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.471137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.834 [2024-10-01 01:53:24.480594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.481008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.481042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.481061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.481299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.481542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.481567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.481584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.485150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.834 [2024-10-01 01:53:24.494415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.494826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.494858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.494883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.495136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.495378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.495404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.495419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.498972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.834 [2024-10-01 01:53:24.508416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.508948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.509009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.509031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.509268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.509509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.509534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.509550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.513112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.834 [2024-10-01 01:53:24.522346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.522733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.522765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.522783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.523034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.834 [2024-10-01 01:53:24.523275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.834 [2024-10-01 01:53:24.523301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.834 [2024-10-01 01:53:24.523317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.834 [2024-10-01 01:53:24.526879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.834 [2024-10-01 01:53:24.536329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.834 [2024-10-01 01:53:24.536751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.834 [2024-10-01 01:53:24.536783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.834 [2024-10-01 01:53:24.536801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.834 [2024-10-01 01:53:24.537053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.537295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.537330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.537347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.540915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.835 [2024-10-01 01:53:24.550165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.550585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.550619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.550638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.550877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.551134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.551160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.551176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.554730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.835 [2024-10-01 01:53:24.564178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.564595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.564628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.564647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.564885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.565139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.565163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.565178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.568741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.835 [2024-10-01 01:53:24.578004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.578529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.578580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.578599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.578836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.579090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.579116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.579132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.582694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.835 [2024-10-01 01:53:24.591945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.592368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.592401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.592420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.592657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.592898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.592923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.592939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.596510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.835 [2024-10-01 01:53:24.605786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.606190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.606223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.606241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.606479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.606721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.606746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.606762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.610333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.835 [2024-10-01 01:53:24.619809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.620203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.620235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.620254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.620492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.620734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.620759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.620774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.624346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.835 [2024-10-01 01:53:24.633830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.634225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.634257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.634276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.634519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.634762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.634786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.634803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.638394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.835 [2024-10-01 01:53:24.647922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.648365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.648399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.648418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.648657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.648899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.648924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.648940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.652515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:44.835 [2024-10-01 01:53:24.661766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.662166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.662198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.835 [2024-10-01 01:53:24.662216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.835 [2024-10-01 01:53:24.662454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.835 [2024-10-01 01:53:24.662696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.835 [2024-10-01 01:53:24.662721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.835 [2024-10-01 01:53:24.662737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.835 [2024-10-01 01:53:24.666309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:44.835 [2024-10-01 01:53:24.675771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:44.835 [2024-10-01 01:53:24.676179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:44.835 [2024-10-01 01:53:24.676211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:44.836 [2024-10-01 01:53:24.676230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:44.836 [2024-10-01 01:53:24.676467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:44.836 [2024-10-01 01:53:24.676710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:44.836 [2024-10-01 01:53:24.676736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:44.836 [2024-10-01 01:53:24.676759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:44.836 [2024-10-01 01:53:24.680350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.095 [2024-10-01 01:53:24.689604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.095 [2024-10-01 01:53:24.690008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.095 [2024-10-01 01:53:24.690040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.095 [2024-10-01 01:53:24.690059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.095 [2024-10-01 01:53:24.690296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.095 [2024-10-01 01:53:24.690539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.095 [2024-10-01 01:53:24.690563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.095 [2024-10-01 01:53:24.690580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.095 [2024-10-01 01:53:24.694155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.095 [2024-10-01 01:53:24.703620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.095 [2024-10-01 01:53:24.704042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.095 [2024-10-01 01:53:24.704075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.095 [2024-10-01 01:53:24.704093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.095 [2024-10-01 01:53:24.704331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.095 [2024-10-01 01:53:24.704573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.095 [2024-10-01 01:53:24.704597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.095 [2024-10-01 01:53:24.704613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.095 [2024-10-01 01:53:24.708186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.095 [2024-10-01 01:53:24.717661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.095 [2024-10-01 01:53:24.718072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.095 [2024-10-01 01:53:24.718106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.095 [2024-10-01 01:53:24.718125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.095 [2024-10-01 01:53:24.718363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.095 [2024-10-01 01:53:24.718606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.095 [2024-10-01 01:53:24.718631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.095 [2024-10-01 01:53:24.718648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.095 [2024-10-01 01:53:24.722206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.095 [2024-10-01 01:53:24.731674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.095 [2024-10-01 01:53:24.732097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.095 [2024-10-01 01:53:24.732147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.095 [2024-10-01 01:53:24.732167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.732405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.732647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.732672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.732688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.736260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.096 [2024-10-01 01:53:24.745518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.745937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.745969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.745988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.746236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.746477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.746501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.746517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.750080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.096 [2024-10-01 01:53:24.759531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.759925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.759957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.759975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.760226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.760468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.760494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.760510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.764072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.096 [2024-10-01 01:53:24.773514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.773926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.773958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.773977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.774227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.774476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.774502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.774518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.778085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.096 [2024-10-01 01:53:24.787525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.787940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.787974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.787993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.788246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.788489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.788515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.788531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.792096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.096 [2024-10-01 01:53:24.801538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.801921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.801954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.801972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.802225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.802467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.802492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.802508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.806073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.096 [2024-10-01 01:53:24.815525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.815938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.815971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.815990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.816241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.816482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.816507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.816524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.820096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.096 [2024-10-01 01:53:24.829543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.830026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.830059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.830077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.830316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.830556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.830582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.830598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.834167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.096 [2024-10-01 01:53:24.843422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.843921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.843972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.843990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.844241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.844483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.844509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.844525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.848092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.096 [2024-10-01 01:53:24.857334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.857731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.857763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.096 [2024-10-01 01:53:24.857782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.096 [2024-10-01 01:53:24.858030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.096 [2024-10-01 01:53:24.858272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.096 [2024-10-01 01:53:24.858297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.096 [2024-10-01 01:53:24.858314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.096 [2024-10-01 01:53:24.861869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.096 [2024-10-01 01:53:24.871321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.096 [2024-10-01 01:53:24.871737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.096 [2024-10-01 01:53:24.871769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.871794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.872047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.872289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.872315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.872331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.875886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.097 [2024-10-01 01:53:24.885338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.097 [2024-10-01 01:53:24.885753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.097 [2024-10-01 01:53:24.885785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.885805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.886056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.886297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.886321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.886336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.889892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.097 [2024-10-01 01:53:24.899399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.097 [2024-10-01 01:53:24.899792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.097 [2024-10-01 01:53:24.899825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.899844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.900093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.900336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.900361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.900378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.903928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.097 [2024-10-01 01:53:24.913445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.097 [2024-10-01 01:53:24.913843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.097 [2024-10-01 01:53:24.913877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.913895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.914145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.914388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.914420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.914437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.917991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.097 [2024-10-01 01:53:24.927443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.097 [2024-10-01 01:53:24.927863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.097 [2024-10-01 01:53:24.927896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.927914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.928166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.928409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.928433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.928449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.932013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.097 [2024-10-01 01:53:24.941466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.097 [2024-10-01 01:53:24.941886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.097 [2024-10-01 01:53:24.941918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.097 [2024-10-01 01:53:24.941937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.097 [2024-10-01 01:53:24.942186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.097 [2024-10-01 01:53:24.942428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.097 [2024-10-01 01:53:24.942453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.097 [2024-10-01 01:53:24.942470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.097 [2024-10-01 01:53:24.946041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.358 [2024-10-01 01:53:24.955500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:24.955921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-10-01 01:53:24.955953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.358 [2024-10-01 01:53:24.955972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.358 [2024-10-01 01:53:24.956219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.358 [2024-10-01 01:53:24.956462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.358 [2024-10-01 01:53:24.956487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.358 [2024-10-01 01:53:24.956503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.358 [2024-10-01 01:53:24.960070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.358 [2024-10-01 01:53:24.969523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:24.969909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-10-01 01:53:24.969941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.358 [2024-10-01 01:53:24.969960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.358 [2024-10-01 01:53:24.970209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.358 [2024-10-01 01:53:24.970452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.358 [2024-10-01 01:53:24.970476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.358 [2024-10-01 01:53:24.970492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.358 [2024-10-01 01:53:24.974050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.358 [2024-10-01 01:53:24.983499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:24.983882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-10-01 01:53:24.983914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.358 [2024-10-01 01:53:24.983932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.358 [2024-10-01 01:53:24.984181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.358 [2024-10-01 01:53:24.984425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.358 [2024-10-01 01:53:24.984449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.358 [2024-10-01 01:53:24.984466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.358 [2024-10-01 01:53:24.988031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.358 [2024-10-01 01:53:24.997505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:24.997916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-10-01 01:53:24.997947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.358 [2024-10-01 01:53:24.997966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.358 [2024-10-01 01:53:24.998216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.358 [2024-10-01 01:53:24.998457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.358 [2024-10-01 01:53:24.998483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.358 [2024-10-01 01:53:24.998499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.358 [2024-10-01 01:53:25.002067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.358 [2024-10-01 01:53:25.011343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:25.011734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.358 [2024-10-01 01:53:25.011767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.358 [2024-10-01 01:53:25.011792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.358 [2024-10-01 01:53:25.012043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.358 [2024-10-01 01:53:25.012285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.358 [2024-10-01 01:53:25.012311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.358 [2024-10-01 01:53:25.012327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.358 [2024-10-01 01:53:25.015885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.358 [2024-10-01 01:53:25.025360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.358 [2024-10-01 01:53:25.025774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.025806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.025825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.026075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.026318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.026343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.026359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.029916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.359 [2024-10-01 01:53:25.039395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.039806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.039839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.039857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.040106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.040350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.040375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.040391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.043946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.359 [2024-10-01 01:53:25.053403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.053828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.053861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.053879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.054130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.054372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.054403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.054421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.057975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.359 [2024-10-01 01:53:25.067257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.067643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.067675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.067693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.067931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.068185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.068211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.068227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.071785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.359 [2024-10-01 01:53:25.081271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.081694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.081727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.081747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.081985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.082239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.082276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.082292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.085851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.359 [2024-10-01 01:53:25.095105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.095526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.095559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.095577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.095816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.096071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.096097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.096113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.099667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.359 [2024-10-01 01:53:25.109136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.109554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.109587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.109605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.109844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.110098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.110124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.110140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.113700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.359 [2024-10-01 01:53:25.123156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.123565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.123598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.123617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.123855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.124112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.124140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.124156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.127717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.359 [2024-10-01 01:53:25.137174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.137588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.137621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.137639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.137877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.138135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.138161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.138178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.141762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.359 [2024-10-01 01:53:25.151298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.359 [2024-10-01 01:53:25.151715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.359 [2024-10-01 01:53:25.151751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.359 [2024-10-01 01:53:25.151771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.359 [2024-10-01 01:53:25.152026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.359 [2024-10-01 01:53:25.152270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.359 [2024-10-01 01:53:25.152294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.359 [2024-10-01 01:53:25.152310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.359 [2024-10-01 01:53:25.155875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.360 [2024-10-01 01:53:25.165147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.360 [2024-10-01 01:53:25.165549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-10-01 01:53:25.165582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.360 [2024-10-01 01:53:25.165600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.360 [2024-10-01 01:53:25.165838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.360 [2024-10-01 01:53:25.166089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.360 [2024-10-01 01:53:25.166114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.360 [2024-10-01 01:53:25.166130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.360 [2024-10-01 01:53:25.169680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.360 [2024-10-01 01:53:25.179137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.360 [2024-10-01 01:53:25.179544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-10-01 01:53:25.179577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.360 [2024-10-01 01:53:25.179595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.360 [2024-10-01 01:53:25.179833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.360 [2024-10-01 01:53:25.180086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.360 [2024-10-01 01:53:25.180111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.360 [2024-10-01 01:53:25.180137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.360 [2024-10-01 01:53:25.183690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.360 [2024-10-01 01:53:25.193148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.360 [2024-10-01 01:53:25.193544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-10-01 01:53:25.193577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.360 [2024-10-01 01:53:25.193595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.360 [2024-10-01 01:53:25.193832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.360 [2024-10-01 01:53:25.194086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.360 [2024-10-01 01:53:25.194112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.360 [2024-10-01 01:53:25.194134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.360 [2024-10-01 01:53:25.197688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.360 [2024-10-01 01:53:25.207148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.360 [2024-10-01 01:53:25.207575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.360 [2024-10-01 01:53:25.207607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.360 [2024-10-01 01:53:25.207626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.360 [2024-10-01 01:53:25.207863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.360 [2024-10-01 01:53:25.208119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.360 [2024-10-01 01:53:25.208145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.360 [2024-10-01 01:53:25.208161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.211720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.621 [2024-10-01 01:53:25.220966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.221360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.221393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.221412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.221650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.221891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.221917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.221932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.225506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.621 [2024-10-01 01:53:25.234958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.235362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.235395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.235413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.235651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.235891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.235916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.235932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.239500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.621 [2024-10-01 01:53:25.248973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.249396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.249439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.249459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.249696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.249938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.249964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.249980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 5466.25 IOPS, 21.35 MiB/s [2024-10-01 01:53:25.255273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.621 [2024-10-01 01:53:25.262835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.263244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.263277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.263296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.263533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.263775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.263799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.263814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.267386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.621 [2024-10-01 01:53:25.276835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.277238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.277271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.277289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.277527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.277768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.277793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.277808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.281376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.621 [2024-10-01 01:53:25.290813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.291210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.621 [2024-10-01 01:53:25.291243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.621 [2024-10-01 01:53:25.291261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.621 [2024-10-01 01:53:25.291499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.621 [2024-10-01 01:53:25.291746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.621 [2024-10-01 01:53:25.291771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.621 [2024-10-01 01:53:25.291787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.621 [2024-10-01 01:53:25.295351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.621 [2024-10-01 01:53:25.304788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.621 [2024-10-01 01:53:25.305217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.305250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.305269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.305506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.305750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.305775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.305791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.309354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.622 [2024-10-01 01:53:25.318789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.319219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.319252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.319270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.319507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.319748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.319773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.319789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.323351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.622 [2024-10-01 01:53:25.332789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.333209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.333241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.333260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.333499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.333742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.333767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.333783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.337352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.622 [2024-10-01 01:53:25.346806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.347217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.347250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.347269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.347507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.347750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.347774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.347790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.351357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.622 [2024-10-01 01:53:25.360787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.361224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.361257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.361276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.361514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.361757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.361782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.361797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.365359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.622 [2024-10-01 01:53:25.374793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.375188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.375221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.375239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.375478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.375721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.375746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.375762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.379323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.622 [2024-10-01 01:53:25.388752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.389172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.389206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.389230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.389469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.389711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.389737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.389753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.393315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.622 [2024-10-01 01:53:25.402824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.403231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.403264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.403283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.403521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.403762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.403788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.403804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.407367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.622 [2024-10-01 01:53:25.416802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.417204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.417237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.417256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.417494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.417737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.417762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.417778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.421342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.622 [2024-10-01 01:53:25.430781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.622 [2024-10-01 01:53:25.431202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.622 [2024-10-01 01:53:25.431236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.622 [2024-10-01 01:53:25.431255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.622 [2024-10-01 01:53:25.431493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.622 [2024-10-01 01:53:25.431743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.622 [2024-10-01 01:53:25.431768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.622 [2024-10-01 01:53:25.431784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.622 [2024-10-01 01:53:25.435348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.623 [2024-10-01 01:53:25.444611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.623 [2024-10-01 01:53:25.445047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.623 [2024-10-01 01:53:25.445081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.623 [2024-10-01 01:53:25.445100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.623 [2024-10-01 01:53:25.445338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.623 [2024-10-01 01:53:25.445581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.623 [2024-10-01 01:53:25.445606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.623 [2024-10-01 01:53:25.445622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.623 [2024-10-01 01:53:25.449275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.623 [2024-10-01 01:53:25.458513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.623 [2024-10-01 01:53:25.458932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.623 [2024-10-01 01:53:25.458965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.623 [2024-10-01 01:53:25.458983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.623 [2024-10-01 01:53:25.459229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.623 [2024-10-01 01:53:25.459472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.623 [2024-10-01 01:53:25.459497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.623 [2024-10-01 01:53:25.459513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.623 [2024-10-01 01:53:25.463076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.623 [2024-10-01 01:53:25.472511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.623 [2024-10-01 01:53:25.472936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.623 [2024-10-01 01:53:25.472968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.623 [2024-10-01 01:53:25.472987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.623 [2024-10-01 01:53:25.473233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.623 [2024-10-01 01:53:25.473474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.623 [2024-10-01 01:53:25.473500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.623 [2024-10-01 01:53:25.473516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.477076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.883 [2024-10-01 01:53:25.486529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.486962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.486994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.487023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.487262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.487502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.487528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.487544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.491105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.883 [2024-10-01 01:53:25.500543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.500954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.500986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.501013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.501252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.501493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.501517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.501533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.505098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.883 [2024-10-01 01:53:25.514534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.514948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.514981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.515008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.515249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.515490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.515516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.515532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.519095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.883 [2024-10-01 01:53:25.528538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.528925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.528956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.528980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.529227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.529470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.529496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.529512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.533074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.883 [2024-10-01 01:53:25.542525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.542921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.542953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.542972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.543220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.543462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.543487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.543503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.547064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.883 [2024-10-01 01:53:25.556498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.556915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.556947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.556965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.557213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.557455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.557480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.557496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.883 [2024-10-01 01:53:25.561056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.883 [2024-10-01 01:53:25.570492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.883 [2024-10-01 01:53:25.570916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.883 [2024-10-01 01:53:25.570948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.883 [2024-10-01 01:53:25.570966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.883 [2024-10-01 01:53:25.571213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.883 [2024-10-01 01:53:25.571456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.883 [2024-10-01 01:53:25.571486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.883 [2024-10-01 01:53:25.571502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.575067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.884 [2024-10-01 01:53:25.584503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.584899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.584932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.584951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.585200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.585444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.585470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.585486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.589044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.884 [2024-10-01 01:53:25.598478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.598900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.598933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.598951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.599200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.599444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.599469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.599485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.603044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.884 [2024-10-01 01:53:25.612479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.612890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.612923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.612941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.613190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.613431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.613457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.613472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.617030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.884 [2024-10-01 01:53:25.626472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.626903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.626936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.626955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.627203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.627448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.627473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.627489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.631047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.884 [2024-10-01 01:53:25.640528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.640940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.640973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.640992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.641242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.641498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.641524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.641540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.645100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.884 [2024-10-01 01:53:25.654612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.655046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.655080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.655099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.655338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.655581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.655607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.655623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.659187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.884 [2024-10-01 01:53:25.668628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.669058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.669091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.669109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.669353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.669594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.669620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.669636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.673198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.884 [2024-10-01 01:53:25.682629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.683042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.683075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.683094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.683332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.683573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.683598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.683614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.687177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.884 [2024-10-01 01:53:25.696612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.697039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.697074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.697093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.697333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.884 [2024-10-01 01:53:25.697575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.884 [2024-10-01 01:53:25.697600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.884 [2024-10-01 01:53:25.697616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.884 [2024-10-01 01:53:25.701178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:45.884 [2024-10-01 01:53:25.710610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.884 [2024-10-01 01:53:25.711012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.884 [2024-10-01 01:53:25.711045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.884 [2024-10-01 01:53:25.711064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.884 [2024-10-01 01:53:25.711302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.885 [2024-10-01 01:53:25.711546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.885 [2024-10-01 01:53:25.711571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.885 [2024-10-01 01:53:25.711593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.885 [2024-10-01 01:53:25.715157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:45.885 [2024-10-01 01:53:25.724603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:45.885 [2024-10-01 01:53:25.724970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.885 [2024-10-01 01:53:25.725015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:45.885 [2024-10-01 01:53:25.725036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:45.885 [2024-10-01 01:53:25.725275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:45.885 [2024-10-01 01:53:25.725517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:45.885 [2024-10-01 01:53:25.725542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:45.885 [2024-10-01 01:53:25.725558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:45.885 [2024-10-01 01:53:25.729122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.146 [2024-10-01 01:53:25.738573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.738992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.739032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.739055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.739293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.739536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.739561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.739577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.743157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
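Each sequence also reports "Failed to flush tqpair=... (9): Bad file descriptor". Errno 9 is EBADF: once the failed socket has been torn down, any further I/O on that descriptor is rejected. A minimal sketch of that behavior (again not SPDK code, just plain POSIX calls on an already-closed descriptor):

/* Minimal sketch (not SPDK code): after a failed socket has been closed,
 * any further I/O on that descriptor fails with errno 9 (EBADF), which is
 * what the "Failed to flush tqpair=... (9): Bad file descriptor" records show. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                       /* the transport tears the socket down after the failed connect */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0)   /* attempting to flush the closed descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));  /* 9: Bad file descriptor */

    return 0;
}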
00:35:46.146 [2024-10-01 01:53:25.752407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.752795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.752827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.752845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.753093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.753336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.753360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.753376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.756931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.146 [2024-10-01 01:53:25.766407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.766824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.766861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.766881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.767128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.767372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.767396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.767412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.770968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.146 [2024-10-01 01:53:25.780418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.780853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.780885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.780904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.781152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.781396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.781421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.781436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.784987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.146 [2024-10-01 01:53:25.794439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.794856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.794888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.794906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.795154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.795397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.795422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.795438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.798995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.146 [2024-10-01 01:53:25.808452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.808865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.808897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.808915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.809163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.809414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.809439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.809455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.813020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.146 [2024-10-01 01:53:25.822470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.822885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.822916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.822935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.823182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.823425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.823449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.823465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.827033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.146 [2024-10-01 01:53:25.836478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.836892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.836924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.836943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.837191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.837434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.837458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.837473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.841056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.146 [2024-10-01 01:53:25.850313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.850732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.850764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.146 [2024-10-01 01:53:25.850782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.146 [2024-10-01 01:53:25.851030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.146 [2024-10-01 01:53:25.851281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.146 [2024-10-01 01:53:25.851305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.146 [2024-10-01 01:53:25.851321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.146 [2024-10-01 01:53:25.854883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.146 [2024-10-01 01:53:25.864342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.146 [2024-10-01 01:53:25.864708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.146 [2024-10-01 01:53:25.864740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.864758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.865005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.865249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.865274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.865289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.868843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.147 [2024-10-01 01:53:25.878295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.878689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.878722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.878740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.878978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.879229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.879255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.879270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.882820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.147 [2024-10-01 01:53:25.892276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.892664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.892695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.892714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.892953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.893204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.893230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.893246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.896804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.147 [2024-10-01 01:53:25.906352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.906771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.906804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.906830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.907083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.907334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.907359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.907375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.910929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.147 [2024-10-01 01:53:25.920374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.920789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.920822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.920841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.921089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.921330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.921355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.921370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.924918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.147 [2024-10-01 01:53:25.934203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.934593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.934626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.934644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.934882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.935136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.935162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.935178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.938734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.147 [2024-10-01 01:53:25.948196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.948610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.948643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.948662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.948900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.949156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.949192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.949209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.952766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.147 [2024-10-01 01:53:25.962213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.962606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.962639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.962657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.962894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.963148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.963174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.963190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.966744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.147 [2024-10-01 01:53:25.976187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.976607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.976640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.976659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.976898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.977153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.977179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.977196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.980751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.147 [2024-10-01 01:53:25.990197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.147 [2024-10-01 01:53:25.990594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.147 [2024-10-01 01:53:25.990627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.147 [2024-10-01 01:53:25.990646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.147 [2024-10-01 01:53:25.990885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.147 [2024-10-01 01:53:25.991140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.147 [2024-10-01 01:53:25.991167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.147 [2024-10-01 01:53:25.991183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.147 [2024-10-01 01:53:25.994737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.408 [2024-10-01 01:53:26.004192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.004582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.004614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.004633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.004870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.005124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.005150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.005167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.008718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.408 [2024-10-01 01:53:26.018162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.018586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.018619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.018637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.018875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.019128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.019154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.019170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.022720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.408 [2024-10-01 01:53:26.032169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.032564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.032597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.032615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.032853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.033106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.033133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.033148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.036702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.408 [2024-10-01 01:53:26.046164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.046581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.046613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.046632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.046876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.047129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.047156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.047172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.050725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.408 [2024-10-01 01:53:26.060166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.060576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.060608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.060627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.060864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.061117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.061143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.061160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.064717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.408 [2024-10-01 01:53:26.074158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.074570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.074602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.408 [2024-10-01 01:53:26.074620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.408 [2024-10-01 01:53:26.074862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.408 [2024-10-01 01:53:26.075114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.408 [2024-10-01 01:53:26.075140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.408 [2024-10-01 01:53:26.075157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.408 [2024-10-01 01:53:26.078711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.408 [2024-10-01 01:53:26.088213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.408 [2024-10-01 01:53:26.088624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.408 [2024-10-01 01:53:26.088657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.088676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.088914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.089171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.089197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.089219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.092776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.409 [2024-10-01 01:53:26.102224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.102636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.102668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.102686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.102925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.103177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.103203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.103219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.106768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.409 [2024-10-01 01:53:26.116213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.116616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.116648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.116666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.116904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.117156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.117181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.117197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.120752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.409 [2024-10-01 01:53:26.130201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.130625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.130657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.130676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.130913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.131172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.131197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.131213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.134765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.409 [2024-10-01 01:53:26.144226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.144629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.144661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.144680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.144917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.145171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.145198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.145214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.148791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.409 [2024-10-01 01:53:26.158356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.158753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.158786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.158806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.159054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.159296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.159321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.159337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.162896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.409 [2024-10-01 01:53:26.172357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.172779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.172812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.172831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.173079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.173322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.173346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.173362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.176917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.409 [2024-10-01 01:53:26.186374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.186786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.186818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.186837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.187092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.187335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.187359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.187375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.190928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.409 [2024-10-01 01:53:26.200376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.200798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.200830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.200848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.201098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.201341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.201365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.201382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.409 [2024-10-01 01:53:26.204932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.409 [2024-10-01 01:53:26.214375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.409 [2024-10-01 01:53:26.214800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.409 [2024-10-01 01:53:26.214833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.409 [2024-10-01 01:53:26.214852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.409 [2024-10-01 01:53:26.215100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.409 [2024-10-01 01:53:26.215342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.409 [2024-10-01 01:53:26.215367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.409 [2024-10-01 01:53:26.215383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.410 [2024-10-01 01:53:26.218938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.410 [2024-10-01 01:53:26.228394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.410 [2024-10-01 01:53:26.228816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.410 [2024-10-01 01:53:26.228847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.410 [2024-10-01 01:53:26.228870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.410 [2024-10-01 01:53:26.229121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.410 [2024-10-01 01:53:26.229363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.410 [2024-10-01 01:53:26.229388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.410 [2024-10-01 01:53:26.229409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.410 [2024-10-01 01:53:26.232986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.410 [2024-10-01 01:53:26.242235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.410 [2024-10-01 01:53:26.242659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.410 [2024-10-01 01:53:26.242692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.410 [2024-10-01 01:53:26.242711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.410 [2024-10-01 01:53:26.242950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.410 [2024-10-01 01:53:26.243202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.410 [2024-10-01 01:53:26.243227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.410 [2024-10-01 01:53:26.243243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.410 [2024-10-01 01:53:26.246810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.410 4373.00 IOPS, 17.08 MiB/s [2024-10-01 01:53:26.257993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.410 [2024-10-01 01:53:26.258429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.410 [2024-10-01 01:53:26.258462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.410 [2024-10-01 01:53:26.258481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.410 [2024-10-01 01:53:26.258718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.410 [2024-10-01 01:53:26.258961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.410 [2024-10-01 01:53:26.258986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.410 [2024-10-01 01:53:26.259012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.262564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.671 [2024-10-01 01:53:26.272010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.272398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.272430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.671 [2024-10-01 01:53:26.272448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.671 [2024-10-01 01:53:26.272687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.671 [2024-10-01 01:53:26.272929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.671 [2024-10-01 01:53:26.272954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.671 [2024-10-01 01:53:26.272971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.276533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.671 [2024-10-01 01:53:26.285971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.286365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.286402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.671 [2024-10-01 01:53:26.286421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.671 [2024-10-01 01:53:26.286659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.671 [2024-10-01 01:53:26.286900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.671 [2024-10-01 01:53:26.286926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.671 [2024-10-01 01:53:26.286942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.290508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.671 [2024-10-01 01:53:26.299947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.300340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.300373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.671 [2024-10-01 01:53:26.300391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.671 [2024-10-01 01:53:26.300629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.671 [2024-10-01 01:53:26.300870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.671 [2024-10-01 01:53:26.300895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.671 [2024-10-01 01:53:26.300910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.304470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.671 [2024-10-01 01:53:26.313905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.314332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.314364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.671 [2024-10-01 01:53:26.314382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.671 [2024-10-01 01:53:26.314620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.671 [2024-10-01 01:53:26.314861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.671 [2024-10-01 01:53:26.314886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.671 [2024-10-01 01:53:26.314902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.318468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.671 [2024-10-01 01:53:26.327924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.328348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.328381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.671 [2024-10-01 01:53:26.328399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.671 [2024-10-01 01:53:26.328637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.671 [2024-10-01 01:53:26.328885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.671 [2024-10-01 01:53:26.328911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.671 [2024-10-01 01:53:26.328927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.671 [2024-10-01 01:53:26.332489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.671 [2024-10-01 01:53:26.341919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.671 [2024-10-01 01:53:26.342313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-10-01 01:53:26.342346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.342364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.342601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.342842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.342867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.342884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.346462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.672 [2024-10-01 01:53:26.355898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.356320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.356352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.356371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.356609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.356851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.356876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.356892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.360455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.672 [2024-10-01 01:53:26.369889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.370283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.370316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.370334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.370572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.370813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.370838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.370854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.374423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.672 [2024-10-01 01:53:26.383865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.384260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.384293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.384311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.384549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.384790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.384815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.384831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.388394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.672 [2024-10-01 01:53:26.397832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.398252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.398285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.398303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.398540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.398782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.398807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.398823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.402482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.672 [2024-10-01 01:53:26.411705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.412124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.412157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.412175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.412413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.412654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.412680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.412696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.416257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.672 [2024-10-01 01:53:26.425702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.426103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.426137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.426162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.426401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.426641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.426667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.426683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.430245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.672 [2024-10-01 01:53:26.439687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.440095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.440127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.440146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.440384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.440624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.440649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.440666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.444242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.672 [2024-10-01 01:53:26.453688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.454127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.454160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.454179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.454417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.454660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.454685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.454701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.458261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.672 [2024-10-01 01:53:26.467711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.672 [2024-10-01 01:53:26.468076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-10-01 01:53:26.468109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.672 [2024-10-01 01:53:26.468127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.672 [2024-10-01 01:53:26.468364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.672 [2024-10-01 01:53:26.468608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.672 [2024-10-01 01:53:26.468640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.672 [2024-10-01 01:53:26.468657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.672 [2024-10-01 01:53:26.472310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.672 [2024-10-01 01:53:26.481549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.673 [2024-10-01 01:53:26.481946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-10-01 01:53:26.481979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.673 [2024-10-01 01:53:26.482006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.673 [2024-10-01 01:53:26.482247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.673 [2024-10-01 01:53:26.482497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.673 [2024-10-01 01:53:26.482521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.673 [2024-10-01 01:53:26.482539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.673 [2024-10-01 01:53:26.486100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.673 [2024-10-01 01:53:26.495535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.673 [2024-10-01 01:53:26.495948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-10-01 01:53:26.495980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.673 [2024-10-01 01:53:26.496005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.673 [2024-10-01 01:53:26.496246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.673 [2024-10-01 01:53:26.496495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.673 [2024-10-01 01:53:26.496520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.673 [2024-10-01 01:53:26.496535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.673 [2024-10-01 01:53:26.500098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.673 [2024-10-01 01:53:26.509543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.673 [2024-10-01 01:53:26.509958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-10-01 01:53:26.509991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.673 [2024-10-01 01:53:26.510018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.673 [2024-10-01 01:53:26.510266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.673 [2024-10-01 01:53:26.510512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.673 [2024-10-01 01:53:26.510538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.673 [2024-10-01 01:53:26.510554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.673 [2024-10-01 01:53:26.514119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.673 [2024-10-01 01:53:26.523567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.934 [2024-10-01 01:53:26.523990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.934 [2024-10-01 01:53:26.524030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.934 [2024-10-01 01:53:26.524050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.934 [2024-10-01 01:53:26.524287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.934 [2024-10-01 01:53:26.524528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.934 [2024-10-01 01:53:26.524552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.934 [2024-10-01 01:53:26.524567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.934 [2024-10-01 01:53:26.528136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.934 [2024-10-01 01:53:26.537572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.934 [2024-10-01 01:53:26.537988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.934 [2024-10-01 01:53:26.538027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.934 [2024-10-01 01:53:26.538047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.934 [2024-10-01 01:53:26.538285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.934 [2024-10-01 01:53:26.538528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.934 [2024-10-01 01:53:26.538553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.934 [2024-10-01 01:53:26.538569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.934 [2024-10-01 01:53:26.542130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.934 [2024-10-01 01:53:26.551589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.934 [2024-10-01 01:53:26.552018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.934 [2024-10-01 01:53:26.552052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.934 [2024-10-01 01:53:26.552070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.934 [2024-10-01 01:53:26.552309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.934 [2024-10-01 01:53:26.552550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.934 [2024-10-01 01:53:26.552575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.934 [2024-10-01 01:53:26.552590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.934 [2024-10-01 01:53:26.556154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.934 [2024-10-01 01:53:26.565595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.934 [2024-10-01 01:53:26.566018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.934 [2024-10-01 01:53:26.566052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.934 [2024-10-01 01:53:26.566078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.934 [2024-10-01 01:53:26.566317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.934 [2024-10-01 01:53:26.566561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.934 [2024-10-01 01:53:26.566586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.934 [2024-10-01 01:53:26.566602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.934 [2024-10-01 01:53:26.570167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.934 [2024-10-01 01:53:26.579608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.934 [2024-10-01 01:53:26.580027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.580060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.580078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.580317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.580558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.580583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.580598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.584166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.935 [2024-10-01 01:53:26.593609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.594027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.594059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.594078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.594316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.594557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.594583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.594599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.598163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.935 [2024-10-01 01:53:26.607597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.608014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.608047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.608065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.608302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.608543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.608568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.608590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.612158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.935 [2024-10-01 01:53:26.621605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.622007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.622040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.622059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.622297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.622538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.622563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.622579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.626149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.935 [2024-10-01 01:53:26.635602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.636017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.636050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.636068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.636306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.636548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.636572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.636588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.640156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.935 [2024-10-01 01:53:26.649614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.649995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.650036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.650055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.650293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.650535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.650560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.650577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.654275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.935 [2024-10-01 01:53:26.663474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.663904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.663937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.663955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.664206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.664448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.664474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.664490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.668051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.935 [2024-10-01 01:53:26.677494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.677985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.678062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.678080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.678318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.678559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.678585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.678601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.682166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.935 [2024-10-01 01:53:26.691405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.691953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.692017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.692038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.692277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.692517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.692542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.692557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.696121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.935 [2024-10-01 01:53:26.705359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.935 [2024-10-01 01:53:26.705765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.935 [2024-10-01 01:53:26.705798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.935 [2024-10-01 01:53:26.705817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.935 [2024-10-01 01:53:26.706073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.935 [2024-10-01 01:53:26.706314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.935 [2024-10-01 01:53:26.706340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.935 [2024-10-01 01:53:26.706355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.935 [2024-10-01 01:53:26.709914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.936 [2024-10-01 01:53:26.719374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.936 [2024-10-01 01:53:26.719762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.936 [2024-10-01 01:53:26.719795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.936 [2024-10-01 01:53:26.719813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.936 [2024-10-01 01:53:26.720064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.936 [2024-10-01 01:53:26.720307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.936 [2024-10-01 01:53:26.720333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.936 [2024-10-01 01:53:26.720349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.936 [2024-10-01 01:53:26.723904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.936 [2024-10-01 01:53:26.733391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.936 [2024-10-01 01:53:26.733789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.936 [2024-10-01 01:53:26.733822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.936 [2024-10-01 01:53:26.733840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.936 [2024-10-01 01:53:26.734092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.936 [2024-10-01 01:53:26.734335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.936 [2024-10-01 01:53:26.734360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.936 [2024-10-01 01:53:26.734377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.936 [2024-10-01 01:53:26.737933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.936 [2024-10-01 01:53:26.747397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.936 [2024-10-01 01:53:26.747822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.936 [2024-10-01 01:53:26.747855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.936 [2024-10-01 01:53:26.747873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.936 [2024-10-01 01:53:26.748125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.936 [2024-10-01 01:53:26.748367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.936 [2024-10-01 01:53:26.748392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.936 [2024-10-01 01:53:26.748414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.936 [2024-10-01 01:53:26.751971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.936 [2024-10-01 01:53:26.761419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.936 [2024-10-01 01:53:26.761947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.936 [2024-10-01 01:53:26.762011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.936 [2024-10-01 01:53:26.762032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.936 [2024-10-01 01:53:26.762270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.936 [2024-10-01 01:53:26.762511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.936 [2024-10-01 01:53:26.762535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.936 [2024-10-01 01:53:26.762551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.936 [2024-10-01 01:53:26.766121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.936 [2024-10-01 01:53:26.775354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.936 [2024-10-01 01:53:26.775773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.936 [2024-10-01 01:53:26.775806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:46.936 [2024-10-01 01:53:26.775824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:46.936 [2024-10-01 01:53:26.776076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:46.936 [2024-10-01 01:53:26.776319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.936 [2024-10-01 01:53:26.776344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.936 [2024-10-01 01:53:26.776360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.936 [2024-10-01 01:53:26.779917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.196 [2024-10-01 01:53:26.789385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.196 [2024-10-01 01:53:26.789871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.196 [2024-10-01 01:53:26.789903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.196 [2024-10-01 01:53:26.789921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.196 [2024-10-01 01:53:26.790173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.196 [2024-10-01 01:53:26.790415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.196 [2024-10-01 01:53:26.790440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.196 [2024-10-01 01:53:26.790456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.196 [2024-10-01 01:53:26.794024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.196 [2024-10-01 01:53:26.803257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.196 [2024-10-01 01:53:26.803680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.196 [2024-10-01 01:53:26.803717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.196 [2024-10-01 01:53:26.803736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.196 [2024-10-01 01:53:26.803975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.196 [2024-10-01 01:53:26.804231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.196 [2024-10-01 01:53:26.804257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.196 [2024-10-01 01:53:26.804274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.196 [2024-10-01 01:53:26.807830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.196 [2024-10-01 01:53:26.817079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.196 [2024-10-01 01:53:26.817488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.196 [2024-10-01 01:53:26.817520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.196 [2024-10-01 01:53:26.817539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.196 [2024-10-01 01:53:26.817776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.196 [2024-10-01 01:53:26.818030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.818056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.818072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.821627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.197 [2024-10-01 01:53:26.831073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.831485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.831517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.831535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.831773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.832028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.832053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.832069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.835623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.197 [2024-10-01 01:53:26.845077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.845491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.845524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.845542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.845780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.846055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.846081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.846097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.849652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.197 [2024-10-01 01:53:26.858893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.859288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.859321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.859340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.859578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.859820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.859845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.859861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.863435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
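Each retry block above follows the same sequence: bdev_nvme disconnects the controller, the reconnect attempt's connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED on Linux), the queued completions are flushed against a now-closed socket ("Bad file descriptor"), and the reset finishes with "Resetting controller failed" before the next attempt starts roughly 14 ms later. In other words, the host keeps polling for a listener that is not there yet. A minimal, SPDK-independent way to check the same condition is a bash /dev/tcp probe, run from wherever the initiator side runs; nothing in this sketch is taken from the test scripts:

    # Probe the target address used in the errors above; a refused connection
    # is the same errno = 111 the nvme_tcp transport keeps reporting.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "10.0.0.2:4420 refused the connection (no listener yet)"
    fi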
00:35:47.197 [2024-10-01 01:53:26.872884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.873321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.873353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.873372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.873609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.873852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.873876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.873893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1059882 Killed "${NVMF_APP[@]}" "$@" 00:35:47.197 [2024-10-01 01:53:26.877466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=1060844 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 1060844 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1060844 ']' 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
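The shell trace here shows why those connects fail: bdevperf.sh has just killed the running target ("${NVMF_APP[@]}", pid 1059882) and tgt_init/nvmfappstart is starting a fresh nvmf_tgt (pid 1060844) inside the cvl_0_0_ns_spdk namespace while bdevperf keeps retrying against the dead listener. A condensed sketch of that restart step follows; the flags, namespace name and RPC socket path are copied from the trace, while the repo-relative binary paths and the polling loop (standing in for the harness's waitforlisten helper) are assumptions:

    # Restart the NVMe-oF target with the traced flags and wait for its RPC socket.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for attempt in $(seq 1 60); do
        # Consider the target up once the RPC server on /var/tmp/spdk.sock answers.
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done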
00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:47.197 01:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.197 [2024-10-01 01:53:26.886717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.887114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.887146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.887166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.887404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.887647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.887671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.887687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.891258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.197 [2024-10-01 01:53:26.900721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.901880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.901919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.901940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.902206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.902472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.902499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.902515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.906193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.197 [2024-10-01 01:53:26.914559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.914978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.915021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.915043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.915281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.915525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.915550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.915566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.197 [2024-10-01 01:53:26.919140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.197 [2024-10-01 01:53:26.928605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.197 [2024-10-01 01:53:26.929017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.197 [2024-10-01 01:53:26.929052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.197 [2024-10-01 01:53:26.929070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.197 [2024-10-01 01:53:26.929308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.197 [2024-10-01 01:53:26.929551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.197 [2024-10-01 01:53:26.929574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.197 [2024-10-01 01:53:26.929591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:26.933161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.198 [2024-10-01 01:53:26.935813] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:35:47.198 [2024-10-01 01:53:26.935885] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.198 [2024-10-01 01:53:26.942618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:26.943039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:26.943071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:26.943090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:26.943327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:26.943570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:26.943594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:26.943610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:26.947370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.198 [2024-10-01 01:53:26.956626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:26.957021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:26.957055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:26.957074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:26.957312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:26.957555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:26.957579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:26.957596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:26.961165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.198 [2024-10-01 01:53:26.970611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:26.971033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:26.971065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:26.971084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:26.971322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:26.971565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:26.971589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:26.971605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:26.975166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.198 [2024-10-01 01:53:26.984622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:26.985019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:26.985051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:26.985070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:26.985307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:26.985549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:26.985573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:26.985589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:26.989158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.198 [2024-10-01 01:53:26.998603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:26.998994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:26.999035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:26.999054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:26.999292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:26.999533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:26.999558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:26.999574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:27.003131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.198 [2024-10-01 01:53:27.012563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:27.012965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:27.013004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:27.013030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:27.013270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:27.013512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:27.013536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:27.013552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:27.014398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:47.198 [2024-10-01 01:53:27.017111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.198 [2024-10-01 01:53:27.026600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:27.027227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:27.027270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:27.027292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:27.027550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:27.027796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:27.027820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:27.027839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:27.031422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.198 [2024-10-01 01:53:27.040675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.198 [2024-10-01 01:53:27.041128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.198 [2024-10-01 01:53:27.041161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.198 [2024-10-01 01:53:27.041181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.198 [2024-10-01 01:53:27.041420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.198 [2024-10-01 01:53:27.041663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.198 [2024-10-01 01:53:27.041687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.198 [2024-10-01 01:53:27.041704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.198 [2024-10-01 01:53:27.045269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.459 [2024-10-01 01:53:27.054550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.459 [2024-10-01 01:53:27.054995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.459 [2024-10-01 01:53:27.055045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.055064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.055303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.055558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.055594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.055611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.059173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.068423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.068980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.069042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.069064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.069309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.069555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.069580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.069598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.073170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.460 [2024-10-01 01:53:27.082431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.082937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.082983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.083013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.083259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.083503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.083528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.083546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.087110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.096349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.096753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.096786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.096804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.097063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.097307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.097331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.097348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.100906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.109084] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.460 [2024-10-01 01:53:27.109123] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.460 [2024-10-01 01:53:27.109141] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.460 [2024-10-01 01:53:27.109154] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.460 [2024-10-01 01:53:27.109167] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
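The app_setup_trace notices just above give the two supported ways to inspect the 0xFFFF tracepoint group this target was started with (-e 0xFFFF): decode shared-memory trace instance 0 live, or copy the raw file for offline analysis. A short sketch of both, assuming the tracing tool sits under build/bin in the same SPDK build tree as nvmf_tgt:

    # Decode a live snapshot of trace instance 0 for the nvmf app, as the notice suggests.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.decoded.txt

    # Or keep the raw shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0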
00:35:47.460 [2024-10-01 01:53:27.109225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:47.460 [2024-10-01 01:53:27.109283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:47.460 [2024-10-01 01:53:27.109287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.460 [2024-10-01 01:53:27.110409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.110868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.110911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.110930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.111178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.111421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.111446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.111463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.115025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.124308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.124916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.124969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.124990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.125258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.125505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.125530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.125549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.129122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.460 [2024-10-01 01:53:27.138393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.139002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.139056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.139078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.139325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.139581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.139607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.139625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.143200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.152292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.152880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.152924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.152945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.153202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.153450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.153476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.153494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.157209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.460 [2024-10-01 01:53:27.166217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.460 [2024-10-01 01:53:27.166673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.460 [2024-10-01 01:53:27.166715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.460 [2024-10-01 01:53:27.166737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.460 [2024-10-01 01:53:27.166982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.460 [2024-10-01 01:53:27.167239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.460 [2024-10-01 01:53:27.167265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.460 [2024-10-01 01:53:27.167283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.460 [2024-10-01 01:53:27.170844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.460 [2024-10-01 01:53:27.180109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.180712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.180761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.180783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.181042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.181289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.181315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.181334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.184892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.461 [2024-10-01 01:53:27.194169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.194781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.194825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.194847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.195117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.195363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.195389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.195417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.198972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.461 [2024-10-01 01:53:27.208207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.208610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.208643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.208662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.208900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.209153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.209179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.209195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.212749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.461 [2024-10-01 01:53:27.221735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.222131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.222161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.222177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.222392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.222619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.222642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.222656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.225883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.461 [2024-10-01 01:53:27.235395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.235786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.235816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.235833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.236089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.236309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.236346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.236360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.239579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.461 [2024-10-01 01:53:27.248841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.249247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.249277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.249294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.249537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.249742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.249763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.249777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.252950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.461 3644.17 IOPS, 14.24 MiB/s [2024-10-01 01:53:27.260160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.461 [2024-10-01 01:53:27.262736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.263188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.263218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.263234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.263484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.263726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.263750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.263774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.267342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.461 [2024-10-01 01:53:27.276546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.276931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.276962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.276981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.277250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.461 [2024-10-01 01:53:27.277509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.461 [2024-10-01 01:53:27.277534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.461 [2024-10-01 01:53:27.277550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.461 [2024-10-01 01:53:27.280843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.461 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.461 [2024-10-01 01:53:27.290047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.461 [2024-10-01 01:53:27.290496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.461 [2024-10-01 01:53:27.290525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.461 [2024-10-01 01:53:27.290542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.461 [2024-10-01 01:53:27.290784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.462 [2024-10-01 01:53:27.291016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.462 [2024-10-01 01:53:27.291054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.462 [2024-10-01 01:53:27.291070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.462 [2024-10-01 01:53:27.294256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.462 [2024-10-01 01:53:27.303563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.462 [2024-10-01 01:53:27.304066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.462 [2024-10-01 01:53:27.304106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.462 [2024-10-01 01:53:27.304126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.462 [2024-10-01 01:53:27.304386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.462 [2024-10-01 01:53:27.304596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.462 [2024-10-01 01:53:27.304619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.462 [2024-10-01 01:53:27.304635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.462 [2024-10-01 01:53:27.307856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.462 Malloc0 00:35:47.462 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.462 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.462 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.462 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.720 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.720 [2024-10-01 01:53:27.317217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.720 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.720 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.720 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.720 [2024-10-01 01:53:27.317606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.720 [2024-10-01 01:53:27.317637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2b090 with addr=10.0.0.2, port=4420 00:35:47.720 [2024-10-01 01:53:27.317653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b090 is same with the state(6) to be set 00:35:47.721 [2024-10-01 01:53:27.317869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2b090 (9): Bad file descriptor 00:35:47.721 [2024-10-01 01:53:27.318097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.721 [2024-10-01 01:53:27.318120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.721 [2024-10-01 01:53:27.318134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
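Interleaved with these reset retries, host/bdevperf.sh is rebuilding the target configuration through rpc_cmd, the test-harness wrapper around scripts/rpc.py. A standalone replay of the same bring-up (transport, Malloc0 bdev, subsystem, namespace, and the TCP listener that appears just below) would look roughly like this sketch; the rpc.py path and default RPC socket are assumptions, and the arguments are simply the ones visible in the xtrace:

    rpc=./scripts/rpc.py                         # run from the SPDK repo root; add -s <sock> if non-default
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 (MB) size, 512-byte block size, named Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420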
00:35:47.721 [2024-10-01 01:53:27.321375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.721 [2024-10-01 01:53:27.329054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.721 [2024-10-01 01:53:27.330706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.721 01:53:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1060176 00:35:47.721 [2024-10-01 01:53:27.407119] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:56.599 4104.29 IOPS, 16.03 MiB/s 4635.50 IOPS, 18.11 MiB/s 5051.67 IOPS, 19.73 MiB/s 5391.60 IOPS, 21.06 MiB/s 5663.36 IOPS, 22.12 MiB/s 5890.17 IOPS, 23.01 MiB/s 6087.46 IOPS, 23.78 MiB/s 6251.29 IOPS, 24.42 MiB/s 00:35:56.599 Latency(us) 00:35:56.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:56.599 Verification LBA range: start 0x0 length 0x4000 00:35:56.599 Nvme1n1 : 15.00 6402.04 25.01 8652.60 0.00 8477.22 691.77 22427.88 00:35:56.599 =================================================================================================================== 00:35:56.599 Total : 6402.04 25.01 8652.60 0.00 8477.22 691.77 22427.88 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.857 rmmod nvme_tcp 00:35:56.857 rmmod nvme_fabrics 00:35:56.857 rmmod nvme_keyring 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.857 01:53:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 1060844 ']' 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 1060844 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1060844 ']' 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1060844 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:56.857 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1060844 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1060844' 00:35:56.858 killing process with pid 1060844 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1060844 00:35:56.858 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1060844 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.116 01:53:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.022 01:53:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:59.282 00:35:59.282 real 0m22.450s 00:35:59.282 user 0m59.338s 00:35:59.282 sys 0m4.505s 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.282 ************************************ 00:35:59.282 END TEST nvmf_bdevperf 00:35:59.282 ************************************ 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.282 ************************************ 00:35:59.282 START TEST nvmf_target_disconnect 00:35:59.282 ************************************ 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:59.282 * Looking for test storage... 00:35:59.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:35:59.282 01:53:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:59.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.282 --rc genhtml_branch_coverage=1 00:35:59.282 --rc genhtml_function_coverage=1 00:35:59.282 --rc genhtml_legend=1 00:35:59.282 --rc geninfo_all_blocks=1 00:35:59.282 --rc geninfo_unexecuted_blocks=1 00:35:59.282 00:35:59.282 ' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:59.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.282 --rc genhtml_branch_coverage=1 00:35:59.282 --rc genhtml_function_coverage=1 00:35:59.282 --rc genhtml_legend=1 00:35:59.282 --rc geninfo_all_blocks=1 00:35:59.282 --rc geninfo_unexecuted_blocks=1 00:35:59.282 00:35:59.282 ' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:59.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.282 --rc genhtml_branch_coverage=1 00:35:59.282 --rc genhtml_function_coverage=1 00:35:59.282 --rc genhtml_legend=1 00:35:59.282 --rc geninfo_all_blocks=1 00:35:59.282 --rc geninfo_unexecuted_blocks=1 00:35:59.282 00:35:59.282 ' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:59.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.282 --rc genhtml_branch_coverage=1 00:35:59.282 --rc genhtml_function_coverage=1 00:35:59.282 --rc genhtml_legend=1 00:35:59.282 --rc geninfo_all_blocks=1 00:35:59.282 --rc geninfo_unexecuted_blocks=1 00:35:59.282 00:35:59.282 ' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.282 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:59.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.283 01:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:01.816 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:01.816 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:01.816 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:01.816 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.816 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:36:01.817 00:36:01.817 --- 10.0.0.2 ping statistics --- 00:36:01.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.817 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:36:01.817 00:36:01.817 --- 10.0.0.1 ping statistics --- 00:36:01.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.817 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.817 ************************************ 00:36:01.817 START TEST nvmf_target_disconnect_tc1 00:36:01.817 ************************************ 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.817 01:53:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:01.817 [2024-10-01 01:53:41.312099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.817 [2024-10-01 01:53:41.312175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1871220 with addr=10.0.0.2, port=4420 00:36:01.817 [2024-10-01 01:53:41.312210] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:01.817 [2024-10-01 01:53:41.312251] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:01.817 [2024-10-01 01:53:41.312267] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:01.817 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:01.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:01.817 Initializing NVMe Controllers 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:01.817 00:36:01.817 real 0m0.100s 00:36:01.817 user 0m0.041s 00:36:01.817 sys 0m0.054s 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:01.817 ************************************ 00:36:01.817 END TEST nvmf_target_disconnect_tc1 00:36:01.817 ************************************ 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:01.817 ************************************ 00:36:01.817 START TEST nvmf_target_disconnect_tc2 00:36:01.817 ************************************ 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1063993 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1063993 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1063993 ']' 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.817 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.817 [2024-10-01 01:53:41.425683] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:36:01.817 [2024-10-01 01:53:41.425758] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.817 [2024-10-01 01:53:41.491133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.818 [2024-10-01 01:53:41.581336] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.818 [2024-10-01 01:53:41.581391] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
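nvmf_tgt is launched here with -m 0xF0 inside the target namespace, and DPDK accordingly reports four usable cores; the reactor lines that follow confirm they are cores 4 through 7. The mask is just a CPU bitmap, which a short illustrative loop (not part of the test scripts) can decode:

    # Decode an SPDK/DPDK core mask into CPU indices; 0xF0 has bits 4-7 set.
    mask=0xF0
    for cpu in $(seq 0 31); do
        if (( (mask >> cpu) & 1 )); then
            printf 'core %d\n' "$cpu"
        fi
    done
    # prints: core 4, core 5, core 6, core 7 (matching the "Reactor started on core N" notices)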
00:36:01.818 [2024-10-01 01:53:41.581406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.818 [2024-10-01 01:53:41.581417] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.818 [2024-10-01 01:53:41.581427] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.818 [2024-10-01 01:53:41.581514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:01.818 [2024-10-01 01:53:41.581575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:01.818 [2024-10-01 01:53:41.581631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:36:01.818 [2024-10-01 01:53:41.581634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.078 Malloc0 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.078 [2024-10-01 01:53:41.762704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.078 01:53:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:02.078 [2024-10-01 01:53:41.790991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:02.078 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:02.079 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1064119
00:36:02.079 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:36:02.079 01:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:36:03.985 01:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1063993
00:36:03.985 01:53:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:36:03.985 Read completed with error (sct=0, sc=8)
00:36:03.985 starting I/O failed
[... the "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" lines, each followed by "starting I/O failed", repeat for the remaining outstanding I/Os on this qpair ...]
00:36:03.986 [2024-10-01 01:53:43.815722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... another burst of Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:36:03.986 [2024-10-01 01:53:43.816056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... another burst of Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:36:03.986 [2024-10-01 01:53:43.816365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:03.986 [2024-10-01 01:53:43.816631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.986 [2024-10-01 01:53:43.816669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:03.986 qpair failed and we were unable to recover it.
[... the "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence above repeats continuously through [2024-10-01 01:53:43.848643], alternating among tqpair addresses 0x7ff504000b90, 0x7ff500000b90 and 0x196b340 ...]
00:36:04.282 [2024-10-01 01:53:43.848781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.848807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.848959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.848985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.849137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.849165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.849311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.849338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.849473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.849499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.849624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.849651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.849837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.849864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.850030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.850196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.850399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 
00:36:04.282 [2024-10-01 01:53:43.850562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.850752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.850937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.850964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.851153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.851322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.851543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.851708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.851876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.851979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.852011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.852147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.852178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 
00:36:04.282 [2024-10-01 01:53:43.852341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.852367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.852505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.282 [2024-10-01 01:53:43.852533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.282 qpair failed and we were unable to recover it. 00:36:04.282 [2024-10-01 01:53:43.852672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.852699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.852871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.852898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.853873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.853900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 
00:36:04.283 [2024-10-01 01:53:43.854045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.854235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.854413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.854553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.854716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.854856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.854883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.855045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.855241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.855404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.855550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 
00:36:04.283 [2024-10-01 01:53:43.855741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.855904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.855930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.856124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.856305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.856527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.856666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.856844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.856983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.857179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.857370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 
00:36:04.283 [2024-10-01 01:53:43.857556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.857690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.857827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.857854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.857993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.858161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.858325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.858488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.858678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.858850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.858878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.859047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.859081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 
00:36:04.283 [2024-10-01 01:53:43.859224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.859252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.859393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.283 [2024-10-01 01:53:43.859419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.283 qpair failed and we were unable to recover it. 00:36:04.283 [2024-10-01 01:53:43.859549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.859575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.859714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.859742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.859878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.859904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.860053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.860216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.860383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.860546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.860719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 
00:36:04.284 [2024-10-01 01:53:43.860912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.860939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.861109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.861299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.861434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.861631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.861822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.861985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.862154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.862289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.862429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 
00:36:04.284 [2024-10-01 01:53:43.862565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.862768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.862937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.862964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.863920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.863946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.864114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 
00:36:04.284 [2024-10-01 01:53:43.864259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.864438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.864592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.864743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.864903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.864929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.865098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.865126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.284 [2024-10-01 01:53:43.865264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.284 [2024-10-01 01:53:43.865290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.284 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.865428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.865453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.865588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.865615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.865719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.865750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 
00:36:04.285 [2024-10-01 01:53:43.865880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.865907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.866882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.866909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.867076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.867215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.867388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 
00:36:04.285 [2024-10-01 01:53:43.867549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.867743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.867920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.867950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.868157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.868319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.868483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.868641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.868817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.868984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.869160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 
00:36:04.285 [2024-10-01 01:53:43.869298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.869460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.869648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.869803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.869829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.869971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.870169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.870341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.870507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.870650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 00:36:04.285 [2024-10-01 01:53:43.870844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.285 [2024-10-01 01:53:43.870871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.285 qpair failed and we were unable to recover it. 
00:36:04.285 [2024-10-01 01:53:43.871040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.285 [2024-10-01 01:53:43.871068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:04.285 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record sequence repeats for tqpair=0x7ff500000b90, addr=10.0.0.2, port=4420 through 2024-10-01 01:53:43.876929 ...]
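On Linux, errno 111 is ECONNREFUSED: nothing at 10.0.0.2:4420 is accepting TCP connections, so every reconnect attempt the host-side driver makes fails immediately and the qpair cannot be recovered. The sketch below is a minimal, plain-POSIX illustration of that failing connect step; it is not SPDK's nvme_tcp/posix sock code, and the address and port are simply the values taken from the records above.

/* Minimal sketch: attempt a single TCP connect to the listener the log keeps
 * retrying (10.0.0.2:4420) and report errno on failure.  With no NVMe/TCP
 * target listening there, connect() fails with errno 111 (ECONNREFUSED),
 * matching the posix_sock_create errors above.  Illustration only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expect errno == ECONNREFUSED (111 on Linux) when nothing listens. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}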
00:36:04.286 Read completed with error (sct=0, sc=8)
00:36:04.286 starting I/O failed
00:36:04.286 Write completed with error (sct=0, sc=8)
00:36:04.286 starting I/O failed
[... further Read and Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:36:04.287 [2024-10-01 01:53:43.877267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:04.287 [2024-10-01 01:53:43.877447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.287 [2024-10-01 01:53:43.877487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:04.287 qpair failed and we were unable to recover it.
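Two failure signals appear in this block: each outstanding Read/Write is completed with an NVMe status of sct=0 (the generic command status type), sc=8, and the completion poller named in the log, spdk_nvme_qpair_process_completions, then reports -6, which is -ENXIO ("No such device or address" on Linux) for the dead qpair. The snippet below is a hedged, self-contained sketch of how a caller typically interprets those two values; the rc/sct/sc values are copied from the records above, and the handling shown is a generic pattern, not SPDK's internal error path.

/* Hedged sketch: interpret the two failure signals seen above.
 * rc = -6 is what a completion-poll call returns when the transport is gone:
 * -ENXIO, "No such device or address" on Linux.
 * sct=0, sc=8 is the NVMe status each outstanding Read/Write was completed
 * with while the qpair was failing.  Values are taken from the log. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    int rc = -6;              /* return value reported by the completion poller */
    unsigned sct = 0, sc = 8; /* status code type / status code on each I/O     */

    if (rc < 0) {
        /* A negative poll result means the qpair is unusable; the pending
         * commands are completed with an error and I/O submission stops. */
        printf("CQ transport error %d (%s) -> treat qpair as failed\n",
               rc, strerror(-rc));
    }

    printf("each outstanding I/O completed with sct=%u, sc=%u -> I/O failed\n",
           sct, sc);
    return 0;
}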
00:36:04.287 [2024-10-01 01:53:43.877659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.287 [2024-10-01 01:53:43.877687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:04.287 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record sequence repeats through 2024-10-01 01:53:43.905041, cycling over tqpair=0x196b340, 0x7ff500000b90, 0x7ff504000b90 and 0x7ff50c000b90, all with addr=10.0.0.2, port=4420 ...]
00:36:04.291 [2024-10-01 01:53:43.905177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.905204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.905349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.905375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.905515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.905542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.905704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.905731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.905869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.905897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.906037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.906206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.906402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.906563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.906726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 
00:36:04.291 [2024-10-01 01:53:43.906911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.906942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.907109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.907137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.907279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.907313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.907450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.907478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.907656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.907683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.907824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.907851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.908048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.908209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.908347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.908536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 
00:36:04.291 [2024-10-01 01:53:43.908723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.908886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.908913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.909931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.909958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.910083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.910228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 
00:36:04.291 [2024-10-01 01:53:43.910355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.910515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.910686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.291 qpair failed and we were unable to recover it. 00:36:04.291 [2024-10-01 01:53:43.910844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.291 [2024-10-01 01:53:43.910870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.911894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.911921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 
00:36:04.292 [2024-10-01 01:53:43.912040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.912207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.912366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.912502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.912663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.912848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.912875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.913027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.913221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.913368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.913559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 
00:36:04.292 [2024-10-01 01:53:43.913697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.913865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.913891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.914847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.914874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.915008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.915214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 
00:36:04.292 [2024-10-01 01:53:43.915389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.915563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.915766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.915954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.915979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.916925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.916951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 
00:36:04.292 [2024-10-01 01:53:43.917092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.917119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.917280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.917306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.917470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.917497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.917664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.917691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.917849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.917880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.918046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.918073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.292 [2024-10-01 01:53:43.918233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.292 [2024-10-01 01:53:43.918260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.292 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.918397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.918424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.918541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.918567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.918736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.918763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 
00:36:04.293 [2024-10-01 01:53:43.918900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.918926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.919930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.919957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.920105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.920256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.920419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 
00:36:04.293 [2024-10-01 01:53:43.920551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.920710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.920901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.920945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.921902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.921929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.922076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 
00:36:04.293 [2024-10-01 01:53:43.922240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.922441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.922568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.922706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.922874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.922901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.923040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.923182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.923384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.923572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.923707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 
00:36:04.293 [2024-10-01 01:53:43.923891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.923935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.924073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.924100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.924232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.924258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.924359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.924385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.293 [2024-10-01 01:53:43.924494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.293 [2024-10-01 01:53:43.924519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.293 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.924660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.924687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.924827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.924853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.924966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.924991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.925134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.925297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 
00:36:04.294 [2024-10-01 01:53:43.925469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.925637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.925770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.925957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.925984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 00:36:04.294 [2024-10-01 01:53:43.926900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.294 [2024-10-01 01:53:43.926927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.294 qpair failed and we were unable to recover it. 
00:36:04.294 - 00:36:04.299 [2024-10-01 01:53:43.927066 - 01:53:43.963716] Repeated qpair connection failures: every attempt in this interval logs the same three lines, differing only in timestamp and in the tqpair handle (0x196b340, 0x7ff504000b90, 0x7ff50c000b90). First occurrence shown verbatim:
00:36:04.294 [2024-10-01 01:53:43.927066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.294 [2024-10-01 01:53:43.927093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:04.294 qpair failed and we were unable to recover it.
00:36:04.299 ... (the same three-line pattern repeats for every subsequent attempt through [2024-10-01 01:53:43.963716], always against addr=10.0.0.2, port=4420) ...
00:36:04.299 [2024-10-01 01:53:43.963851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.963890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.964905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.964932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.965070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.965235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.965405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 
00:36:04.299 [2024-10-01 01:53:43.965559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.965702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.965928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.965956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.966135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.966162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.966300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.966328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.966469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.966496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.966645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.966672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.966808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.966841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 
00:36:04.299 [2024-10-01 01:53:43.967315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.967931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.967958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.968081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.968109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.968250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.968277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.968394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.968420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.968560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.299 [2024-10-01 01:53:43.968588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.299 qpair failed and we were unable to recover it. 00:36:04.299 [2024-10-01 01:53:43.968744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.968775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 
00:36:04.300 [2024-10-01 01:53:43.968966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.968992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.969147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.969174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.969312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.969340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.969456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.969483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.969648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.969691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.969848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.969878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.970057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.970196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.970387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.970597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 
00:36:04.300 [2024-10-01 01:53:43.970768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.970902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.970931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.971095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.971123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.971224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.971251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.971414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.971442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.971574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.971606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.971755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.971801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 
00:36:04.300 [2024-10-01 01:53:43.972510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.972873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.972981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.973902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.973931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 
00:36:04.300 [2024-10-01 01:53:43.974087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.974249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.974391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.974582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.974748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.974925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.974952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.975125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.975152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.975300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.975328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.975462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.975488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.975649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.975676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 
00:36:04.300 [2024-10-01 01:53:43.975848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.975876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.976047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.976075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.300 qpair failed and we were unable to recover it. 00:36:04.300 [2024-10-01 01:53:43.976217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.300 [2024-10-01 01:53:43.976244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.976410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.976438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.976582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.976610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.976808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.976837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.976974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.977171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.977327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.977518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 
00:36:04.301 [2024-10-01 01:53:43.977663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.977828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.977855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.978849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.978876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.979012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.979205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 
00:36:04.301 [2024-10-01 01:53:43.979392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.979553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.979717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.979883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.979911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.980864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.980893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 
00:36:04.301 [2024-10-01 01:53:43.981032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.981226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.981389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.981555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.981722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.981890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.981917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.982054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.982082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.982220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.982248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.982388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.982415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.301 [2024-10-01 01:53:43.982557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.982601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 
00:36:04.301 [2024-10-01 01:53:43.982759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.301 [2024-10-01 01:53:43.982786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.301 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.982951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.982978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.983958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.983987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.984162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.984189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.984331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.984359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 
00:36:04.302 [2024-10-01 01:53:43.984488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.984514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.984653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.984680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.984860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.984887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.985903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.985931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 00:36:04.302 [2024-10-01 01:53:43.986067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.302 [2024-10-01 01:53:43.986095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.302 qpair failed and we were unable to recover it. 
00:36:04.302 [2024-10-01 01:53:43.986233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.302 [2024-10-01 01:53:43.986261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:04.302 qpair failed and we were unable to recover it.
00:36:04.302 [repeated: the same three-line sequence -- posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- recurs for every reconnect attempt on this tqpair between 2024-10-01 01:53:43.986 and 01:53:44.021]
00:36:04.307 [2024-10-01 01:53:44.021915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.307 [2024-10-01 01:53:44.021958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:04.307 qpair failed and we were unable to recover it.
00:36:04.307 [2024-10-01 01:53:44.022110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.022294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.022464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.022634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.022792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.022955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.022982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.023151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.023178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.023342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.023385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.023547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.023575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.023755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.023785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 
00:36:04.307 [2024-10-01 01:53:44.023949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.023977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.024954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.024981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.025130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.025158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.025339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.025366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.025479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.025506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 
00:36:04.307 [2024-10-01 01:53:44.025617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.025644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.025780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.025808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.026007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.026038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.307 [2024-10-01 01:53:44.026232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.307 [2024-10-01 01:53:44.026259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.307 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.026406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.026438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.026577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.026604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.026765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.026810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.027011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.027200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.027349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 
00:36:04.308 [2024-10-01 01:53:44.027523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.027718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.027939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.027967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.028919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.028946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.029113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.029142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 
00:36:04.308 [2024-10-01 01:53:44.029256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.029284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.029446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.029476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.029622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.029653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.029779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.029806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.029994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.030166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.030364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.030521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.030688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.030853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.030881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 
00:36:04.308 [2024-10-01 01:53:44.030995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.031839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.031979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.032150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.032338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.032501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 
00:36:04.308 [2024-10-01 01:53:44.032661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.032847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.032893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.033075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.033236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.033405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.033571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.308 [2024-10-01 01:53:44.033768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.308 qpair failed and we were unable to recover it. 00:36:04.308 [2024-10-01 01:53:44.033896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.033924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.034098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.034313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 
00:36:04.309 [2024-10-01 01:53:44.034465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.034621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.034782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.034930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.034957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.035095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.035123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.035261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.035289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.035447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.035478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.035648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.035679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.035839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.035870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.036032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 
00:36:04.309 [2024-10-01 01:53:44.036209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.036348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.036559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.036767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.036901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.036945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.037107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.037245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.037415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.037561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.037784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 
00:36:04.309 [2024-10-01 01:53:44.037946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.037973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.038954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.038984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.039121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.039148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.039267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.039294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.039430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.039458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 
00:36:04.309 [2024-10-01 01:53:44.039645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.039676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.039860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.039890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.040046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.040074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.040212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.040239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.309 [2024-10-01 01:53:44.040361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.309 [2024-10-01 01:53:44.040396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.309 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.040525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.040569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.040719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.040750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.040889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.040930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.041078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.041210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 
00:36:04.310 [2024-10-01 01:53:44.041366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.041556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.041707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.041868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.041918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.042113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.042251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.042471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.042633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.042818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.042977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 
00:36:04.310 [2024-10-01 01:53:44.043155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.043312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.043521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.043704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.043931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.043961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.044099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.044126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.044230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.044258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.044404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.044435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.044605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.044635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 00:36:04.310 [2024-10-01 01:53:44.044782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.310 [2024-10-01 01:53:44.044812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.310 qpair failed and we were unable to recover it. 
00:36:04.310 [2024-10-01 01:53:44.044957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.310 [2024-10-01 01:53:44.044987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:04.310 qpair failed and we were unable to recover it.
00:36:04.310 - 00:36:04.314 (repeated output condensed) The same three-line error sequence recurs without interruption: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." The sequence repeats for tqpair=0x7ff50c000b90 from 01:53:44.044957 through 01:53:44.074869, then for tqpair=0x7ff500000b90 from 01:53:44.075047 through 01:53:44.082257, and then again for tqpair=0x7ff50c000b90 from 01:53:44.082444 through 01:53:44.084037.
00:36:04.314 [2024-10-01 01:53:44.084162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.084193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.084347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.084374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.084498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.084543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.084715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.084772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.084948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.084983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.085160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.085188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.085347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.085377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.314 [2024-10-01 01:53:44.085498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.314 [2024-10-01 01:53:44.085526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.314 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.085682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.085707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.085843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.085869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.085974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.086163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.086347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.086508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.086697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.086852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.086879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.087007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.087198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.087368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.087531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.087673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.087851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.087876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.088914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.088942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.089097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.089125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.089324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.089351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.089507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.089537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.089687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.089722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.089876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.089905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.090880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.090908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.091043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.091183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.091358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.091493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.091681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.091838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.091868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.092028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.092162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.092379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.092570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.092733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.092942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.092971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.093117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.093162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.093328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.093360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.093511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.093542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.093671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.093701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.093868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.093914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.094068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.094101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.094235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.094262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.094384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.094412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.094577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.094615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 
00:36:04.315 [2024-10-01 01:53:44.094774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.094805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.094991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.095029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.095188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.315 [2024-10-01 01:53:44.095217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.315 qpair failed and we were unable to recover it. 00:36:04.315 [2024-10-01 01:53:44.095369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.095399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.095551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.095580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.095730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.095758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.095895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.095923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.096088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.096133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.096255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.096285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.096420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.096447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 
00:36:04.316 [2024-10-01 01:53:44.096558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.096584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.096764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.096796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.096972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.097190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.097357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.097594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.097783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.097940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.097974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.098104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.098133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.098271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.098298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 
00:36:04.316 [2024-10-01 01:53:44.098442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.098482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.098653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.098682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.098793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.098819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.098956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.099177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.099350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.099526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.099711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.099865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.099897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.100060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 
00:36:04.316 [2024-10-01 01:53:44.100235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.100395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.100546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.100747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.100879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.100905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.101050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.101085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.101225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.101272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.101414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.101442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.101617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.101649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.101800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.101830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 
00:36:04.316 [2024-10-01 01:53:44.101980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.102169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.102324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.102515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.102705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.316 [2024-10-01 01:53:44.102864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.316 [2024-10-01 01:53:44.102893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.316 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.103034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.103063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.103200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.103229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.103425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.103455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.103607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.103634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 
00:36:04.610 [2024-10-01 01:53:44.103801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.103848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.103978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.104022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.104167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.104197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.104388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.104416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.104734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.104771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.104943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.104978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.105106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.105137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.105323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.105353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.105523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.105553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 00:36:04.610 [2024-10-01 01:53:44.105701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.610 [2024-10-01 01:53:44.105731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.610 qpair failed and we were unable to recover it. 
00:36:04.610 [2024-10-01 01:53:44.105871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.105924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.106093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.106121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.106246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.106278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.106427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.106461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.106631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.106663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.106834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.106862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.107011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.107048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.107256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.107312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.107472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.107514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.107651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.107680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 
00:36:04.611 [2024-10-01 01:53:44.107831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.107859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.108904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.108933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.109064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.109204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.109401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 
00:36:04.611 [2024-10-01 01:53:44.109580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.109775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.109915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.109957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.110123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.110153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.110285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.110315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.110534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.110561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.110718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.110747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.110862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.110893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.111061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.111225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 
00:36:04.611 [2024-10-01 01:53:44.111370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.111566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.111781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.611 [2024-10-01 01:53:44.111942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.611 [2024-10-01 01:53:44.111967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.611 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.112091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.112118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.112226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.112253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.112405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.112434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.112618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.112645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.112860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.112888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.113069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 
00:36:04.612 [2024-10-01 01:53:44.113217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.113367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.113502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.113690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.113851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.113879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.114045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.114184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.114386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.114520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.114691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 
00:36:04.612 [2024-10-01 01:53:44.114833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.114876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.115056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.115087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.115237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.115267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.115394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.115422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.115562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.115607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.115830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.115860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 
00:36:04.612 [2024-10-01 01:53:44.116635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.116924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.116951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.117130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.117161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.117344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.117374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.117517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.117545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.117678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.117706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.612 qpair failed and we were unable to recover it. 00:36:04.612 [2024-10-01 01:53:44.117811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.612 [2024-10-01 01:53:44.117838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.118006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.118170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 
00:36:04.613 [2024-10-01 01:53:44.118317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.118507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.118716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.118901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.118928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.119112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.119267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.119429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.119614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.119798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.119976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 
00:36:04.613 [2024-10-01 01:53:44.120196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.120384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.120549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.120767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.120934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.120961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.121132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.121159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.121307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.121334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.121498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.121528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.121643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.121673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.121857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.121884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 
00:36:04.613 [2024-10-01 01:53:44.122069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.122100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.122280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.122310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.122464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.122494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.122679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.122706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.122822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.122850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.122987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.123143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.123298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.123468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.123688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 
00:36:04.613 [2024-10-01 01:53:44.123837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.123866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.613 qpair failed and we were unable to recover it. 00:36:04.613 [2024-10-01 01:53:44.124053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.613 [2024-10-01 01:53:44.124081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.124238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.124278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.124439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.124467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.124608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.124635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.124779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.124806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.124916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.124944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.125132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.125162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.125286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.125316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.125479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.125506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 
00:36:04.614 [2024-10-01 01:53:44.125642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.125687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.125829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.125863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.126949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.126976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.127122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.127163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.127318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.127348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 
00:36:04.614 [2024-10-01 01:53:44.127490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.127525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.127677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.127703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.127815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.127848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.127993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.128182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.128363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.128513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.128664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.128842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.128871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.129005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 
00:36:04.614 [2024-10-01 01:53:44.129150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.129342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.129526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.129720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.129927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.129957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.614 qpair failed and we were unable to recover it. 00:36:04.614 [2024-10-01 01:53:44.130118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.614 [2024-10-01 01:53:44.130144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.130255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.130281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.130411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.130443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.130594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.130625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.130787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.130814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 
00:36:04.615 [2024-10-01 01:53:44.130923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.130950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.131935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.131962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.132116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.132142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.132282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.132309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.132451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.132483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 
00:36:04.615 [2024-10-01 01:53:44.132627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.132653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.132792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.132819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.133878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.133990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.134024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.134142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.134168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 
00:36:04.615 [2024-10-01 01:53:44.134312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.134343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.134503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.134531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.134699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.615 [2024-10-01 01:53:44.134725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.615 qpair failed and we were unable to recover it. 00:36:04.615 [2024-10-01 01:53:44.134843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.134871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.135836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.135862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 
00:36:04.616 [2024-10-01 01:53:44.135979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.136957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.136982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.137140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.137277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.137446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 
00:36:04.616 [2024-10-01 01:53:44.137604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.137813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.137945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.137973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.138938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.138964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 00:36:04.616 [2024-10-01 01:53:44.139096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.616 [2024-10-01 01:53:44.139124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.616 qpair failed and we were unable to recover it. 
00:36:04.622 [2024-10-01 01:53:44.174336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.174364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.174495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.174526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.174716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.174746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.174898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.174924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.175079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.175109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.175264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.175294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.175427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.175456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.175615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.175649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.622 [2024-10-01 01:53:44.175794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.622 [2024-10-01 01:53:44.175839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.622 qpair failed and we were unable to recover it. 00:36:04.623 [2024-10-01 01:53:44.175990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.623 [2024-10-01 01:53:44.176025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.623 qpair failed and we were unable to recover it. 
00:36:04.623 [2024-10-01 01:53:44.177148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.623 [2024-10-01 01:53:44.177174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:04.623 qpair failed and we were unable to recover it.
00:36:04.623 [2024-10-01 01:53:44.177330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.623 [2024-10-01 01:53:44.177376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:04.623 qpair failed and we were unable to recover it.
00:36:04.627 [2024-10-01 01:53:44.203792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.203837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.203981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.204200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.204369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.204556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.204746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.204926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.204961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.205103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.205241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.205404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 
00:36:04.627 [2024-10-01 01:53:44.205623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.205809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.205950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.205976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.627 qpair failed and we were unable to recover it. 00:36:04.627 [2024-10-01 01:53:44.206155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.627 [2024-10-01 01:53:44.206202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.206358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.206388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.206547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.206573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.206691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.206738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.206893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.206927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.207108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.207139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.207306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.207334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 
00:36:04.628 [2024-10-01 01:53:44.207453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.207497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.207678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.207717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.207852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.207884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.208944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.208973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.209174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.209202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 
00:36:04.628 [2024-10-01 01:53:44.209322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.209350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.209546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.209573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.209725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.209756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.209915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.209945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.210080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.210113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.210276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.210303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.210445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.210473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.210669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.210698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.210852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.210881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.211025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.211055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 
00:36:04.628 [2024-10-01 01:53:44.211192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.211222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.211390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.211420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.211582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.211614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.211806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.211833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.211990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.212048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.212231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.212259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.212391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.212424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.628 [2024-10-01 01:53:44.212593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.628 [2024-10-01 01:53:44.212629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.628 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.212760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.212788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.212963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.212994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 
00:36:04.629 [2024-10-01 01:53:44.213140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.213172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.213332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.213359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.213500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.213542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.213713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.213746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.213870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.213902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.214092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.214268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.214437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.214582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.214745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 
00:36:04.629 [2024-10-01 01:53:44.214960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.214992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.215122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.215154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.215309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.215340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.215470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.215497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.215663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.215706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.215865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.215897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.216068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.216264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.216413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.216555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 
00:36:04.629 [2024-10-01 01:53:44.216727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.216897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.216935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.217061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.217090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.217237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.217264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.217474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.217504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.217666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.217693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.217845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.217883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.218070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.218103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.218268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.218298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.218437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.218471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 
00:36:04.629 [2024-10-01 01:53:44.218595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.629 [2024-10-01 01:53:44.218622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.629 qpair failed and we were unable to recover it. 00:36:04.629 [2024-10-01 01:53:44.218787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.218814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.218983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.219182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.219352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.219559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.219721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.219901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.219929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.220077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.220238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 
00:36:04.630 [2024-10-01 01:53:44.220393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.220523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.220736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.220933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.220964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.221126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.221158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.221298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.221326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.221438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.221466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.221630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.221670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.221832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.221867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.222026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 
00:36:04.630 [2024-10-01 01:53:44.222178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.222356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.222539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.222762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.222906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.222950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.223113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.223144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.223333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.223363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.223482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.223509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.223643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.223673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.223814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.223843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 
00:36:04.630 [2024-10-01 01:53:44.223981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.224031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.224213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.224241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.224390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.224419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.224602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.224633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.630 [2024-10-01 01:53:44.224796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.630 [2024-10-01 01:53:44.224835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.630 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.225005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.225036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.225172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.225218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.225365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.225395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.225563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.225595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.225779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.225807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 
00:36:04.631 [2024-10-01 01:53:44.225969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.226141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.226325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.226460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.226645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.226828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.226861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.227057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.227239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.227400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.227584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 
00:36:04.631 [2024-10-01 01:53:44.227737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.227899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.227927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.228909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.228936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.229050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.229081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.229225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.229255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 
00:36:04.631 [2024-10-01 01:53:44.229375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.229421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.229601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.229633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.229793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.229824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.631 [2024-10-01 01:53:44.229978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.631 [2024-10-01 01:53:44.230014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.631 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.230142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.230194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.230349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.230379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.230556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.230587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.230746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.230774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.230884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.230911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.231068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.231096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 
00:36:04.632 [2024-10-01 01:53:44.231276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.231307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.231479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.231506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.231624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.231654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.231807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.231837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.231985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.232239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.232432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.232581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.232726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.232898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.232926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 
00:36:04.632 [2024-10-01 01:53:44.233067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.233112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.233265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.233296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.233448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.233479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.233611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.233638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.233782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.233811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.234009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.234196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.234387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.234566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.234709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 
00:36:04.632 [2024-10-01 01:53:44.234888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.234917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.235926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.235954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.236126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.236158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.236310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.236337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 00:36:04.632 [2024-10-01 01:53:44.236539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.632 [2024-10-01 01:53:44.236566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.632 qpair failed and we were unable to recover it. 
00:36:04.633 [2024-10-01 01:53:44.236691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.236721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.236874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.236905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.237902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.237930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.238049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.238076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.238238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.238267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 
00:36:04.633 [2024-10-01 01:53:44.238479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.238531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.238667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.238695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.238831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.238881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.239059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.239091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.239268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.239298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.239444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.239475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.239616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.239643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.239798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.239827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.240012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.240046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.240183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.240213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 
00:36:04.633 [2024-10-01 01:53:44.240408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.240438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.240613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.240642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.240779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.240808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.240964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.241169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.241360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.241580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.241753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.241917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.241944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.242125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.242163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 
00:36:04.633 [2024-10-01 01:53:44.242304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.242334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.242517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.242545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.242653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.242680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.242851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.242881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.243038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.243069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.243220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.633 [2024-10-01 01:53:44.243255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.633 qpair failed and we were unable to recover it. 00:36:04.633 [2024-10-01 01:53:44.243426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.243470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.243590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.243619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.243764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.243794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.243959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.243985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 
00:36:04.634 [2024-10-01 01:53:44.244145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.244172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.244290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.244319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.244461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.244490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.244632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.244660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.244854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.244885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.245035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.245211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.245424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.245568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.245737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 
00:36:04.634 [2024-10-01 01:53:44.245931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.245976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.246159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.246186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.246304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.246333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.246474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.246503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.246679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.246707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.246872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.246901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.247056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.247201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.247360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.247565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 
00:36:04.634 [2024-10-01 01:53:44.247711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.247936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.247968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.248166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.248197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.248357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.248397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.248530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.248576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.248729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.248759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.248893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.248928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.249102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.249131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.249246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.249273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.249464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.249500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 
00:36:04.634 [2024-10-01 01:53:44.249704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.249735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.249873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.249903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.250094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.634 [2024-10-01 01:53:44.250125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.634 qpair failed and we were unable to recover it. 00:36:04.634 [2024-10-01 01:53:44.250243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.250273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.250434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.250476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.250680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.250710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.250906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.250937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.251104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.251132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.251254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.251282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.251458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.251485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 
00:36:04.635 [2024-10-01 01:53:44.251625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.251652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.251800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.251829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.251973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.252183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.252369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.252517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.252699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.252886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.252914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.253047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.253094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.253268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.253299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 
00:36:04.635 [2024-10-01 01:53:44.253456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.253485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.253682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.253710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.253865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.253895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.254093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.254127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.254274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.254302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.254478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.254507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.254653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.254681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.254860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.254888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.255048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.255262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 
00:36:04.635 [2024-10-01 01:53:44.255408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.255567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.255738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.255912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.255939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.256079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.256107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.256287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.256316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.635 qpair failed and we were unable to recover it. 00:36:04.635 [2024-10-01 01:53:44.256457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.635 [2024-10-01 01:53:44.256491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.256678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.256711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.256856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.256882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.257016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 
00:36:04.636 [2024-10-01 01:53:44.257185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.257361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.257509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.257691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.257887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.257917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.258099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.258126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.258265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.258316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.258439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.258470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.258612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.258642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.258801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.258829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 
00:36:04.636 [2024-10-01 01:53:44.258969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.259026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.259188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.259217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.259393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.259441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.259588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.259617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.259759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.259803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.259984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.260047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.260230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.260261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.260393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.260419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.260590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.260636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.260789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.260821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 
00:36:04.636 [2024-10-01 01:53:44.260974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.261209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.261354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.261530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.261757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.261929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.261957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.262131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.262162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.262344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.262373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.262488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.262516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 00:36:04.636 [2024-10-01 01:53:44.262690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.636 [2024-10-01 01:53:44.262718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.636 qpair failed and we were unable to recover it. 
00:36:04.639 [2024-10-01 01:53:44.281620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.281664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.281816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.281847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.282962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.282988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.283149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.283180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 00:36:04.639 [2024-10-01 01:53:44.283302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.639 [2024-10-01 01:53:44.283332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.639 qpair failed and we were unable to recover it. 
00:36:04.641 [2024-10-01 01:53:44.297573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.297602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.297766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.297792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.297899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.297925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.298861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 00:36:04.641 [2024-10-01 01:53:44.298969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.641 [2024-10-01 01:53:44.299002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.641 qpair failed and we were unable to recover it. 
00:36:04.641 [2024-10-01 01:53:44.299117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.299143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.299334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.299364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.299515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.299544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.299700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.299727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.299838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.299865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.299994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.300217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.300416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.300558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.300747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 
00:36:04.642 [2024-10-01 01:53:44.300938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.300968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.301112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.301139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.301277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.301322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.301484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.301514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.301665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.301709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.301825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.301852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.302008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.302158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.302353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.302543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 
00:36:04.642 [2024-10-01 01:53:44.302688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.302855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.302882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.303875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.303902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 
00:36:04.642 [2024-10-01 01:53:44.304354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.304846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.304984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.305124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.305249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.305416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.305600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 00:36:04.642 [2024-10-01 01:53:44.305754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.642 qpair failed and we were unable to recover it. 
00:36:04.642 [2024-10-01 01:53:44.305922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.642 [2024-10-01 01:53:44.305949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.306909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.306937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.307083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.307268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.307412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 
00:36:04.643 [2024-10-01 01:53:44.307622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.307793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.307959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.307985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.308151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.308181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.308305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.308335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.308527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.308555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.308727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.308757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.308874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.308905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.309058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.309088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.309235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.309262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 
00:36:04.643 [2024-10-01 01:53:44.309402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.309447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.309623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.309653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.309829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.309859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.309987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.310169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.310437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.310634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.310801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.310963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.310989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.311185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.311213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 
00:36:04.643 [2024-10-01 01:53:44.311356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.311383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.311503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.311530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.311662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.311689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.311799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.311831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.312884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.312911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 
00:36:04.643 [2024-10-01 01:53:44.313080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.643 [2024-10-01 01:53:44.313124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.643 qpair failed and we were unable to recover it. 00:36:04.643 [2024-10-01 01:53:44.313260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.313287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.313454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.313482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.313632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.313659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.313800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.313827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.313988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.314179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.314376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.314507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.314700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 
00:36:04.644 [2024-10-01 01:53:44.314910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.314937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.315880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.315924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.316148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.316178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.316309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.316339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.316527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.316554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 
00:36:04.644 [2024-10-01 01:53:44.316712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.316742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.316893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.316924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.317934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.317961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.318102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.318307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 
00:36:04.644 [2024-10-01 01:53:44.318443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.318586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.318719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.318882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.318913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.319033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.319063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.319225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.319253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.319387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.319431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.644 qpair failed and we were unable to recover it. 00:36:04.644 [2024-10-01 01:53:44.319592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.644 [2024-10-01 01:53:44.319619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.319739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.319767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.319942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.319969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 
00:36:04.645 [2024-10-01 01:53:44.320098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.320143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.320309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.320339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.320516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.320564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.320725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.320752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.320857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.320884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.321004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.321209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.321379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.321590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.321740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 
00:36:04.645 [2024-10-01 01:53:44.321895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.321924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.322866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.322894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.323053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.323084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.323248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.323276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.323439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.323466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 
00:36:04.645 [2024-10-01 01:53:44.323664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.323709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.323834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.323866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.324922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.324950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.325108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.325154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.325309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.325339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 
00:36:04.645 [2024-10-01 01:53:44.325447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.325477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.325612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.325639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.325776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.325803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.325965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.326006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.326191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.326221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.326389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.326416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.326638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.326665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.326838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.326868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.327031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.327059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 00:36:04.645 [2024-10-01 01:53:44.327167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.327194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.645 qpair failed and we were unable to recover it. 
00:36:04.645 [2024-10-01 01:53:44.327310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.645 [2024-10-01 01:53:44.327337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.327503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.327530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.327666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.327696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.327852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.327879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.327992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.328185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.328378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.328520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.328685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.328871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.328901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 
00:36:04.646 [2024-10-01 01:53:44.329063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.329228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.329365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.329526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.329718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.329885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.329912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 
00:36:04.646 [2024-10-01 01:53:44.330676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.330866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.330995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.331047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.331214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.331242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.331448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.331500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.331727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.331758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.331883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.331914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.332056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.332084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.332225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.332267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.332489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.332519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 
00:36:04.646 [2024-10-01 01:53:44.332673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.332702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.332830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.332857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.333017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.333045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.333226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.333260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.333445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.333475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.333605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.333632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.333862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.333891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.334113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.334285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.334424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 
00:36:04.646 [2024-10-01 01:53:44.334575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.334754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.334935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.334964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.646 [2024-10-01 01:53:44.335129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.646 [2024-10-01 01:53:44.335156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.646 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.335294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.335321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.335526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.335553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.335713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.335740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.335853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.335880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.336020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.336192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-10-01 01:53:44.336372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.336562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.336698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.336892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.336921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.337854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.337880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-10-01 01:53:44.338035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.338095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.338262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.338308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.338428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.338457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.338606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.338634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.338789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.338821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.338975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.339206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.339402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.339539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.339709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-10-01 01:53:44.339957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.339988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.340172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.340200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.340365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.340425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.340568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.340606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.340732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.340764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.340929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.340958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.341109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.341157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.341307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.341345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.341528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.341556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.341692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.341719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 
00:36:04.647 [2024-10-01 01:53:44.341857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.341885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.342064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.342226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.342418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.342605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.342790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.647 [2024-10-01 01:53:44.342966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.647 [2024-10-01 01:53:44.343005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.647 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.343151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.343179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.343349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.343376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.343588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.343618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-10-01 01:53:44.343761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.343806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.344004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.344035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.344214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.344244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.344431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.344458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.344619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.344665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.344836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.344865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.345008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.345238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.345389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.345548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-10-01 01:53:44.345715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.345903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.345934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.346096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.346127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.346265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.346293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.346442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.346470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.346611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.346639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.346835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.346867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.347007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.347139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.347327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-10-01 01:53:44.347500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.347696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.347841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.347877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.348054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.348263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.348438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.348626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.348819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.348992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.349208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 
00:36:04.648 [2024-10-01 01:53:44.349418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.349613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.349789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.349923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.349954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.350092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.350121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.350303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.350331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.648 [2024-10-01 01:53:44.350454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.648 [2024-10-01 01:53:44.350481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.648 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.350658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.350686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.350799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.350826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.351009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 
00:36:04.649 [2024-10-01 01:53:44.351175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.351321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.351490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.351678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.351860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.351891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.352017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.352157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.352376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.352597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.352783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 
00:36:04.649 [2024-10-01 01:53:44.352924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.352968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.353152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.353181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.353316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.353344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.353525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.353552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.353694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.353721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.353919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.353950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.354087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.354118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.354318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.354346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.354528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.354558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.354725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.354754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 
00:36:04.649 [2024-10-01 01:53:44.354865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.354896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.355951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.355978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.356138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.356171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.356341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.356373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.356540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.356570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 
00:36:04.649 [2024-10-01 01:53:44.356722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.356753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.356910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.356937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.357906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.357934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.358046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.358075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.358250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.358282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 
00:36:04.649 [2024-10-01 01:53:44.358398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.358429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.358566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.649 [2024-10-01 01:53:44.358594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.649 qpair failed and we were unable to recover it. 00:36:04.649 [2024-10-01 01:53:44.358703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.358730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.358924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.358954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.359118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.359149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.359318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.359348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.359507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.359537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.359727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.359761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.359905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.359934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.360085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.360113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 
00:36:04.650 [2024-10-01 01:53:44.360302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.360340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.360499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.360532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.360687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.360720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.360863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.360897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.361956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.361986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 
00:36:04.650 [2024-10-01 01:53:44.362115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.362146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.362280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.362307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.362473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.362519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.362674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.362705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.362864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.362893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.363025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.363159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.363351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.363575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.363770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 
00:36:04.650 [2024-10-01 01:53:44.363937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.363982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.364141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.364171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.364330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.364362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.364529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.364557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.364675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.364718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.364872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.364903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.365046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.365265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.365412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.365609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 
00:36:04.650 [2024-10-01 01:53:44.365775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.365939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.365975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.366135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.366303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.366462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.366664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.650 [2024-10-01 01:53:44.366844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.650 qpair failed and we were unable to recover it. 00:36:04.650 [2024-10-01 01:53:44.366990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.367181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.367354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 
00:36:04.651 [2024-10-01 01:53:44.367495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.367657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.367851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.367881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.368927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.368954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.369104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.369134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 
00:36:04.651 [2024-10-01 01:53:44.369294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.369346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.369532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.369559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.369701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.369729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.369881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.369926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.370942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.370973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 
00:36:04.651 [2024-10-01 01:53:44.371122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.371151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.371315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.371361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.371515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.371546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.371723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.371751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.371910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.371942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.372109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.372140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.372289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.372318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.372478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.372505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.372690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.372725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.372881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.372912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 
00:36:04.651 [2024-10-01 01:53:44.373068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.373100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.373244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.373273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.373468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.373500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.373657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.373695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.373888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.373915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.374026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.374053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.374195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.374248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.374417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.374447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.374613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.374657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.374807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.374842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 
00:36:04.651 [2024-10-01 01:53:44.374986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.375052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.375191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.375221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.651 [2024-10-01 01:53:44.375381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.651 [2024-10-01 01:53:44.375413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.651 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.375599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.375628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.375747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.375781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.375926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.375955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.376151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.376183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.376312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.376345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.376467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.376496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.376658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.376701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 
00:36:04.652 [2024-10-01 01:53:44.376828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.376866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.377066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.377248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.377454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.377656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.377847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.377994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.378179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.378419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.378618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 
00:36:04.652 [2024-10-01 01:53:44.378760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.378939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.378969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.379129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.379159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.379338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.379365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.379512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.379542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.379687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.379717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.379854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.379882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.380050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.380240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.380430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 
00:36:04.652 [2024-10-01 01:53:44.380622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.380825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.380959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.380986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.381231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.381261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.381390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.381420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.381544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.381570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.381686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.381713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.381884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.381913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.382133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.382161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.382324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.382351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 
00:36:04.652 [2024-10-01 01:53:44.382483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.382522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.382663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.382693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.382845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.382874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.383013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.652 [2024-10-01 01:53:44.383041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.652 qpair failed and we were unable to recover it. 00:36:04.652 [2024-10-01 01:53:44.383206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.383233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 00:36:04.653 [2024-10-01 01:53:44.383358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.383387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 00:36:04.653 [2024-10-01 01:53:44.383547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.383574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 00:36:04.653 [2024-10-01 01:53:44.383713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.383741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 00:36:04.653 [2024-10-01 01:53:44.383905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.383950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 00:36:04.653 [2024-10-01 01:53:44.384106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.653 [2024-10-01 01:53:44.384136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.653 qpair failed and we were unable to recover it. 
00:36:04.653 [2024-10-01 01:53:44.384268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.653 [2024-10-01 01:53:44.384297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:04.653 qpair failed and we were unable to recover it.
00:36:04.658 [the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 repeats continuously for each reconnect attempt through 2024-10-01 01:53:44.420784]
00:36:04.658 [2024-10-01 01:53:44.421001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.421177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.421353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.421520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.421687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.421849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.421880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.422026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.422216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.422437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.422643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 
00:36:04.658 [2024-10-01 01:53:44.422799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.422961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.422988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.423135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.423162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.423321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.423351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.423508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.423536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.423749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.423776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.423932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.423962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.424101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.424129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.424266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.424292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 00:36:04.658 [2024-10-01 01:53:44.424439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.658 [2024-10-01 01:53:44.424465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.658 qpair failed and we were unable to recover it. 
00:36:04.659 [2024-10-01 01:53:44.424604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.424635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.424768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.424797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.424951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.424981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.425130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.425157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.425321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.425347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.425566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.425595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.425750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.425780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.425938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.425964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.426108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.426156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.426305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.426334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 
00:36:04.659 [2024-10-01 01:53:44.426486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.426516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.426666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.426694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.426810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.426836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.427887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.427917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.428072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 
00:36:04.659 [2024-10-01 01:53:44.428217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.428426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.428567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.428747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.659 [2024-10-01 01:53:44.428874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.659 [2024-10-01 01:53:44.428901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.659 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.429135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.429166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.429327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.429355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.429502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.429529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.429712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.429741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.429890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.429920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 
00:36:04.948 [2024-10-01 01:53:44.430079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.430240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.430393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.430594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.430787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.430969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.430996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.431138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.431182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.431329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.431358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.431472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.431501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.431634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.431660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 
00:36:04.948 [2024-10-01 01:53:44.431823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.431869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.431989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.432208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.432378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.432531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.432744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.432876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.432903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.433049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.433076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.433253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.433283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.948 qpair failed and we were unable to recover it. 00:36:04.948 [2024-10-01 01:53:44.433432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.948 [2024-10-01 01:53:44.433462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 
00:36:04.949 [2024-10-01 01:53:44.433610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.433640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.433768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.433795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.433902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.433929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.434934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.434964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.435141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.435169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 
00:36:04.949 [2024-10-01 01:53:44.435338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.435383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.435562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.435589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.435701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.435729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.435860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.435887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.436024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.436069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.436218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.436252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.436383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.436409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.436629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.436656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.436855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.436882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.437018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 
00:36:04.949 [2024-10-01 01:53:44.437199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.437377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.437535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.437753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.437911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.437940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.438087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.438114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.438253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.438280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.438515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.438544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.438700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.438730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.438857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.438884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 
00:36:04.949 [2024-10-01 01:53:44.438993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.439045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.439216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.439246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.439419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.439449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.439608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.439634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.439825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.439854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.439990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.949 [2024-10-01 01:53:44.440035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.949 qpair failed and we were unable to recover it. 00:36:04.949 [2024-10-01 01:53:44.440223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.440250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.440386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.440412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.440544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.440587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.440746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.440774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 
00:36:04.950 [2024-10-01 01:53:44.440889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.440916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.441912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.441939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.442131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.442307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.442489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 
00:36:04.950 [2024-10-01 01:53:44.442628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.442779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.442926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.442956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.443151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.443311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.443506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.443653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.443832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.443990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.444026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 00:36:04.950 [2024-10-01 01:53:44.444150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.444179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it. 
00:36:04.950 [2024-10-01 01:53:44.444331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.950 [2024-10-01 01:53:44.444361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.950 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." entries repeat continuously from 01:53:44.444545 through 01:53:44.482439 (elapsed 00:36:04.950-00:36:04.956) and are elided here ...]
00:36:04.956 [2024-10-01 01:53:44.482657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.482709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it.
00:36:04.956 [2024-10-01 01:53:44.482878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.482905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.483105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.483135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.483320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.483349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.483499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.483529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.483713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.483740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.483852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.483894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.484052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.484082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.484233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.484266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.484448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.484475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.484592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.484619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 
00:36:04.956 [2024-10-01 01:53:44.484752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.484778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.484969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.485004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.485227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.485261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.485421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.485450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.485632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.485661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.485837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.485867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.486029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.486061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.486219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.956 [2024-10-01 01:53:44.486245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.956 qpair failed and we were unable to recover it. 00:36:04.956 [2024-10-01 01:53:44.486426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.486453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.486589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.486616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 
00:36:04.957 [2024-10-01 01:53:44.486778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.486804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.486948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.486995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.487236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.487265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.487477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.487530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.487716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.487743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.487894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.487924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.488101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.488131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.488310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.488372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.488528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.488555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.488735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.488764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 
00:36:04.957 [2024-10-01 01:53:44.488919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.488948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.489136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.489166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.489297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.489323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.489486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.489512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.489704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.489733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.489854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.489898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.490064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.490246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.490449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.490591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 
00:36:04.957 [2024-10-01 01:53:44.490781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.490965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.490995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.491175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.491201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.491333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.491360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.491502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.491530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.491694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.491721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.491959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.491988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.492157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.492187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.492347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.492375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.492506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.492550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 
00:36:04.957 [2024-10-01 01:53:44.492724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.957 [2024-10-01 01:53:44.492753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.957 qpair failed and we were unable to recover it. 00:36:04.957 [2024-10-01 01:53:44.492906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.492936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.493947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.493974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.494157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.494186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.494298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.494327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 
00:36:04.958 [2024-10-01 01:53:44.494507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.494534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.494695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.494724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.494874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.494905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.495882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.495909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.496073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 
00:36:04.958 [2024-10-01 01:53:44.496314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.496478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.496658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.496797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.496930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.496957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.497077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.497104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.497249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.497276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.497433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.497478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.497625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.497655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.497834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.497864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 
00:36:04.958 [2024-10-01 01:53:44.498018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.498876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.498988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.499023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.499161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.958 [2024-10-01 01:53:44.499188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.958 qpair failed and we were unable to recover it. 00:36:04.958 [2024-10-01 01:53:44.499360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.499387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.499490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.499517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 
00:36:04.959 [2024-10-01 01:53:44.499676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.499705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.499866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.499893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.500062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.500190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.500359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.500563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.500772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.500990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.501028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.501215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.501242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.501383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.501426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 
00:36:04.959 [2024-10-01 01:53:44.501585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.501611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.501790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.501820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.502915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.502949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.503097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.503241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 
00:36:04.959 [2024-10-01 01:53:44.503404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.503580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.503763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.503902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.503928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.504105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.504314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.504488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.504680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.504865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.504982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 
00:36:04.959 [2024-10-01 01:53:44.505169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.505350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.505489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.505695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.505874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.959 [2024-10-01 01:53:44.505901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.959 qpair failed and we were unable to recover it. 00:36:04.959 [2024-10-01 01:53:44.506046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.506238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.506385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.506555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.506681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 
00:36:04.960 [2024-10-01 01:53:44.506855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.506885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.507092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.507268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.507478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.507657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.507838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.507971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.508011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.508116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.508143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.508340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.508370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.508636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.508663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 
00:36:04.960 [2024-10-01 01:53:44.508802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.508829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.508971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.509844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.509981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.510142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.510319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 
00:36:04.960 [2024-10-01 01:53:44.510485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.510685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.510872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.510899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.511929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.511959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.512136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.512166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 
00:36:04.960 [2024-10-01 01:53:44.512303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.512330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.512501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.960 [2024-10-01 01:53:44.512528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.960 qpair failed and we were unable to recover it. 00:36:04.960 [2024-10-01 01:53:44.512650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.512680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.512849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.512875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.513884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.513911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 
00:36:04.961 [2024-10-01 01:53:44.514051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.514079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.514198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.514224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.514407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.514448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.514592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.514637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.514840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.514884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.515030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.515189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.515388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.515573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.515739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 
00:36:04.961 [2024-10-01 01:53:44.515933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.515963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.516108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.516134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.516279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.516306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.516535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.516587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.516737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.516789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.516942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.516972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.517125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.517153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.517272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.517300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.517437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.517464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.517591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.517621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 
00:36:04.961 [2024-10-01 01:53:44.517752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.517797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.517982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.518172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.518324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.518491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.518690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.518862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.518892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.519046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.519074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.519209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.961 [2024-10-01 01:53:44.519236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.961 qpair failed and we were unable to recover it. 00:36:04.961 [2024-10-01 01:53:44.519385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.519415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 
00:36:04.962 [2024-10-01 01:53:44.519568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.519600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.519755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.519790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.519925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.519952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.520937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.520966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.521158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.521185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 
00:36:04.962 [2024-10-01 01:53:44.521306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.521332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.521441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.521467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.521629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.521659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.521788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.521816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.522830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.522857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 
00:36:04.962 [2024-10-01 01:53:44.523021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.523181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.523370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.523528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.523686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.523878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.523907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.524092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.524236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.524374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.524586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 
00:36:04.962 [2024-10-01 01:53:44.524777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.524944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.524970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.525116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.962 [2024-10-01 01:53:44.525142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.962 qpair failed and we were unable to recover it. 00:36:04.962 [2024-10-01 01:53:44.525306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.525349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.525480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.525506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.525646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.525672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.525832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.525858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 
00:36:04.963 [2024-10-01 01:53:44.526469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.526934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.526960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.527141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.527168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.527344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.527374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.527507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.527535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.527671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.527697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.527874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.527919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.528105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.528140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 
00:36:04.963 [2024-10-01 01:53:44.528280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.528308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.528443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.528470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.528713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.528768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.528966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.528993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.529178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.529205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.529361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.529398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.529605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.529633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.529812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.529841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.530003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.530173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 
00:36:04.963 [2024-10-01 01:53:44.530425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.530587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.530789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.530925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.530952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.531127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.531155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.531269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.531296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.531476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.531503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.531664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.531692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.531854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.531884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 00:36:04.963 [2024-10-01 01:53:44.532042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.532073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.963 qpair failed and we were unable to recover it. 
00:36:04.963 [2024-10-01 01:53:44.532208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.963 [2024-10-01 01:53:44.532235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.532375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.532401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.532549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.532579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.532756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.532786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.532938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.532965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.533120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.533148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.533296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.533323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.533463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.533490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.533653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.533681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.533840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.533870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 
00:36:04.964 [2024-10-01 01:53:44.534017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.534202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.534413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.534576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.534735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.534918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.534947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.535150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.535177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.535302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.535329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.535440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.535467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.535641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.535671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 
00:36:04.964 [2024-10-01 01:53:44.535797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.535825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.535989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.536039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.536194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.536221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.536414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.536444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.536600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.536627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.536759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.536811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.537015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.537052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.537213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.537239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.537387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.537414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 00:36:04.964 [2024-10-01 01:53:44.537530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.537557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it. 
00:36:04.964 [2024-10-01 01:53:44.537725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.964 [2024-10-01 01:53:44.537752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.964 qpair failed and we were unable to recover it.
[... the same three-message error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt logged between 01:53:44.537 and 01:53:44.575, differing only in timestamps ...]
00:36:04.970 [2024-10-01 01:53:44.575142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.575172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it.
00:36:04.970 [2024-10-01 01:53:44.575297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.575324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.575464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.575491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.575633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.575662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.575840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.575870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.576870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.576897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 
00:36:04.970 [2024-10-01 01:53:44.577031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.577195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.577328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.577522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.577673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.577837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.577864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.970 qpair failed and we were unable to recover it. 00:36:04.970 [2024-10-01 01:53:44.578010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.970 [2024-10-01 01:53:44.578046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.578155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.578200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.578377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.578404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.578539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.578566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 
00:36:04.971 [2024-10-01 01:53:44.578726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.578752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.578862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.578889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.579934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.579961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.580106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.580133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.580268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.580316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 
00:36:04.971 [2024-10-01 01:53:44.580432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.580462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.580611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.580641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.580824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.580852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.580970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.581199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.581374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.581568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.581704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.581924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.581953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.582131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.582159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 
00:36:04.971 [2024-10-01 01:53:44.582272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.582298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.582441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.582486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.582642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.582673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.582829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.582859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 00:36:04.971 [2024-10-01 01:53:44.583823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.971 [2024-10-01 01:53:44.583851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.971 qpair failed and we were unable to recover it. 
00:36:04.971 [2024-10-01 01:53:44.584024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.584218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.584401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.584571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.584787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.584946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.584975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.585126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.585152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.585271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.585298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.585504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.585530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.585672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.585699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 
00:36:04.972 [2024-10-01 01:53:44.585803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.585831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.585976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.586157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.586370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.586560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.586730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.586901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.586931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.587055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.587218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.587370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 
00:36:04.972 [2024-10-01 01:53:44.587574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.587768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.587943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.587971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.588885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.588911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 
00:36:04.972 [2024-10-01 01:53:44.589239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.589865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.589972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.590025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.590149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.972 [2024-10-01 01:53:44.590179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.972 qpair failed and we were unable to recover it. 00:36:04.972 [2024-10-01 01:53:44.590323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.590353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.590534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.590560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.590696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.590741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 
00:36:04.973 [2024-10-01 01:53:44.590869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.590899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.591912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.591939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.592050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.592244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.592390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 
00:36:04.973 [2024-10-01 01:53:44.592582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.592767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.592913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.592943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.593154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.593313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.593474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.593648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.593826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.593992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.594186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 
00:36:04.973 [2024-10-01 01:53:44.594402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.594590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.594753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.594885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.594911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.595912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.595944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 
00:36:04.973 [2024-10-01 01:53:44.596111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.596139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.596248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.596275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.596412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.596442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.596560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.973 [2024-10-01 01:53:44.596590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.973 qpair failed and we were unable to recover it. 00:36:04.973 [2024-10-01 01:53:44.596729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.596756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.596910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.596937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.597127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.597155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.597270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.597314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.597467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.597494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.597597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.597624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 
00:36:04.974 [2024-10-01 01:53:44.597819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.597849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.598872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.598899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.599015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.599044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.599208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.599252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 00:36:04.974 [2024-10-01 01:53:44.599402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.974 [2024-10-01 01:53:44.599432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.974 qpair failed and we were unable to recover it. 
00:36:04.974-00:36:04.979 [... the same error pattern repeats back-to-back through 01:53:44.634: posix.c:1055:posix_sock_create reports connect() failed, errno = 111, then nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7ff50c000b90 and, from 01:53:44.623 onward, also for tqpair=0x7ff500000b90 (always addr=10.0.0.2, port=4420), and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:04.979 [2024-10-01 01:53:44.634507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.979 [2024-10-01 01:53:44.634534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.979 qpair failed and we were unable to recover it. 00:36:04.979 [2024-10-01 01:53:44.634659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.979 [2024-10-01 01:53:44.634702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.634880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.634910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.635898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.635926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.636039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 
00:36:04.980 [2024-10-01 01:53:44.636208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.636359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.636521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.636714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.636902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.636932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.637068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.637096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.637258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.637285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.637399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.637445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.637591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.637621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.637782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.637816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 
00:36:04.980 [2024-10-01 01:53:44.637984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.638021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.638174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.638206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.638386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.638415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.638618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.638670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.638834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.638861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.638978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.639197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.639407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.639621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.639778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 
00:36:04.980 [2024-10-01 01:53:44.639955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.639985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.640129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.640162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.640328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.640361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.640481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.640509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.640645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.640673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.640805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.640856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.980 qpair failed and we were unable to recover it. 00:36:04.980 [2024-10-01 01:53:44.641010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.980 [2024-10-01 01:53:44.641038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.641147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.641176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.641351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.641395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.641545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.641577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-10-01 01:53:44.641731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.641759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.641877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.641905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.642959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.642988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.643160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.643188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.643316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.643345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-10-01 01:53:44.643476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.643504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.643671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.643702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.643870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.643897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.644062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.644094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.644244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.644271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.644436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.644463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.644634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.644663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.644806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.644833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.645008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.645040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.645174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.645206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-10-01 01:53:44.645401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.645428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.645567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.645594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.645743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.645789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.645970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.646219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.646429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.646581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.646735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.646931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.646958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.647131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.647176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-10-01 01:53:44.647327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.647358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-10-01 01:53:44.647537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-10-01 01:53:44.647567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.647725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.647758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.647947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.647978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.648118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.648150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.648278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.648309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.648462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.648489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.648655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.648699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.648848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.648883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.649046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-10-01 01:53:44.649237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.649383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.649532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.649741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.649939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.649966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.650140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.650173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.650371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.650403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.650555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.650594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.650734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.650760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.650874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.650901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-10-01 01:53:44.651020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.651166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.651344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.651515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.651688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.651858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.651885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.652112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.652145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.652287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.652330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.652469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.652497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.652678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.652723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-10-01 01:53:44.652867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.652896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.653046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.653201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.653439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.653621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.653773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.653969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.654017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.654183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.654214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.654379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-10-01 01:53:44.654417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-10-01 01:53:44.654529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.654555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-10-01 01:53:44.654670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.654698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.654886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.654915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.655114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.655281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.655442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.655607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.655792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.655958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.656155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.656338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-10-01 01:53:44.656553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.656739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.656923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.656953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.657942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.657969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.658117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.658161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-10-01 01:53:44.658321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.658348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.658486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.658514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.658703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.658730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.658844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.658887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.659062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.659092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.659273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.659300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.659466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.659493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.659626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.659655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.659832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.659862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.660032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.660079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-10-01 01:53:44.660259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.660289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.660408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.660437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.660619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.660648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.660800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.660848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.660995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.661030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.661147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-10-01 01:53:44.661178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-10-01 01:53:44.661329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.661359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.661503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.661533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.661693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.661720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.661866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.661914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-10-01 01:53:44.662096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.662130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.662311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.662346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.662501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.662539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.662655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.662701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.662849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.662887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.663043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.663237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.663406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.663600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.663767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-10-01 01:53:44.663928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.663963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.664125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.664154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.664260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.664289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.664429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.664458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.664656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.664684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.664805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.664833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.665003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.665049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.665227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.665259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.665415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.665447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.665610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.665641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-10-01 01:53:44.665834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.665862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.666927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.666961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.667144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.667174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.667336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.667372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-10-01 01:53:44.667516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.667547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.667659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.667688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-10-01 01:53:44.667825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-10-01 01:53:44.667853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.668919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.668946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.669065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-10-01 01:53:44.669253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.669390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.669581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.669767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.669958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.669989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.670129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.670171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.670365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.670394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.670538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.670566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.670701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.670730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.670863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.670918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-10-01 01:53:44.671070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.671241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.671446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.671614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.671775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.671942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.671969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.672098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.672125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.672315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.672345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.672476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.672503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.672651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.672694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-10-01 01:53:44.672869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.672899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.673022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.673066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.673206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.673233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.673369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.673397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.673531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.673574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-10-01 01:53:44.673695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-10-01 01:53:44.673739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.673875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.673903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.674045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.674241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.674424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-10-01 01:53:44.674588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.674721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.674907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.674936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.675890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.675917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.676049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-10-01 01:53:44.676242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.676416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.676588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.676741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.676928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.676958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.677150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.677178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.677282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.677309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.677440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.677487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.677637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.677667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.677811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.677841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-10-01 01:53:44.678020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.678208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.678370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.678549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.678737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.678899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.678926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.679100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.679246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.679381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.679526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-10-01 01:53:44.679682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.679864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.679894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-10-01 01:53:44.680027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-10-01 01:53:44.680055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.680219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.680264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.680390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.680420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.680565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.680595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.680724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.680752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.680866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.680893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.681081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.681234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-10-01 01:53:44.681462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.681601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.681755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.681911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.681942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.682109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.682137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.682300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.682330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.682515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.682543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.682687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.682731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.682870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.682897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.683016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-10-01 01:53:44.683198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.683349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.683540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.683712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.683882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.683926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.684090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.684271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.684403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.684635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.684770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-10-01 01:53:44.684932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.684960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.685075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.685102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.685253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.685280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.685424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.685453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.685652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.685680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.685848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.685877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.686033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.686083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.686225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.686252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.686403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.686430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-10-01 01:53:44.686574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-10-01 01:53:44.686601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-10-01 01:53:44.686737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.686764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.686901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.686931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.687910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.687937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.688112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.688143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.688316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.688351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 
00:36:04.988 [2024-10-01 01:53:44.688498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.688524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.688668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.688695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.688827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.688870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.689864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.689891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-10-01 01:53:44.690006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-10-01 01:53:44.690050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 
[... 2024-10-01 01:53:44.690206 through 01:53:44.725146: the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats without variation, with only the timestamps advancing ...]
00:36:04.993 [2024-10-01 01:53:44.725304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.725333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.725479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.725509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.725710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.725738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.725922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.725953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.726172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.726200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.726326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.726354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.726471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.726497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.726604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.726631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.993 [2024-10-01 01:53:44.726767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.993 [2024-10-01 01:53:44.726794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.993 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.726958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.726989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-10-01 01:53:44.727165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.727192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.727316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.727359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.727512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.727541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.727692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.727721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.727880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.727906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.728051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.728199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.728362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.728543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.728703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-10-01 01:53:44.728928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.728957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.729099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.729130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.729268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.729294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.729405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.729432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.729595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.729638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.729815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.729845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.730042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.730207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.730398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.730602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-10-01 01:53:44.730791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.730967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.730993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.731133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.731176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.731312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.731359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.731522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.731550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.731689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.731716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.731852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.731880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.732025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.732194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.732367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-10-01 01:53:44.732541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.732746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.732961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.732988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.733140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-10-01 01:53:44.733170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-10-01 01:53:44.733364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.733394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.733577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.733606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.733779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.733806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.733920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.733965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.734140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.734167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.734349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.734379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-10-01 01:53:44.734547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.734575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.734706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.734750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.734936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.734966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.735151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.735180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.735355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.735382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.735525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.735553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.735746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.735777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.735966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.736204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.736336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-10-01 01:53:44.736481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.736690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.736874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.736904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.737100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.737279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.737469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.737675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.737841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.737977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.738012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.738191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.738226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-10-01 01:53:44.738370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.738397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.738539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.738566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.738758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.738788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.738975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-10-01 01:53:44.739011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-10-01 01:53:44.739167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.739193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.739341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.739385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.739543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.739571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.739737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.739780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.739944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.739971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.740135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-10-01 01:53:44.740323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.740459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.740587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.740751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.740926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.740970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.741163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.741208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.741348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.741375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.741517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.741545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.741730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.741760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.741883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.741913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-10-01 01:53:44.742105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.742243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.742442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.742598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.742783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.742938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.742982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.743145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.743174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.743324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.743354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.743479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.743507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.743652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.743678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-10-01 01:53:44.743843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.743888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.744061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.744281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.744440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.744615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.744818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.744948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.745244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.745425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.745583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-10-01 01:53:44.745751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.745924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.745951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.746147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.746326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.746490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.746632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.746849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.746981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.747153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.747326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-10-01 01:53:44.747527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.747743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.747935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.747962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.748103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.748131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.748271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.748309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.748449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.748476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-10-01 01:53:44.748614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-10-01 01:53:44.748641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.748797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.748827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.748978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.749193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 
00:36:04.997 [2024-10-01 01:53:44.749331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.749522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.749711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.749885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.749912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.750086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.750118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.750234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.750260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.750440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.750470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.750657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.750687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.750845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.750872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 00:36:04.997 [2024-10-01 01:53:44.751024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.997 [2024-10-01 01:53:44.751066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:04.997 qpair failed and we were unable to recover it. 
00:36:05.296 [2024-10-01 01:53:44.787658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.787702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.787828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.787859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.788944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.788972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.789123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.789151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.789313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.789341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 
00:36:05.296 [2024-10-01 01:53:44.789487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.789515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.789680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.789724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.789878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.789908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.790117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.790275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.790482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.790654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.790802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.790985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.791123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 
00:36:05.296 [2024-10-01 01:53:44.791289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.791468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.791595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.791741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.791921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.791951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.792106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.792137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.792291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.792319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.792464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.792491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.792622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.296 [2024-10-01 01:53:44.792649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.296 qpair failed and we were unable to recover it. 00:36:05.296 [2024-10-01 01:53:44.792787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.792815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 
00:36:05.297 [2024-10-01 01:53:44.792981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.793192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.793335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.793491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.793672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.793844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.793872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.794034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.794175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.794320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.794531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 
00:36:05.297 [2024-10-01 01:53:44.794706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.794913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.794944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.795107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.795136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.795301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.795345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.795463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.795493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.795634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.795662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.795798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.795829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.796017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.796195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.796361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 
00:36:05.297 [2024-10-01 01:53:44.796525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.796716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.796906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.796936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.797948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.797975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.798095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.798123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 
00:36:05.297 [2024-10-01 01:53:44.798320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.798348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.798454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.798481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.798617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.798645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.798787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.798815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.798979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.799032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.799155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.799186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.297 qpair failed and we were unable to recover it. 00:36:05.297 [2024-10-01 01:53:44.799348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.297 [2024-10-01 01:53:44.799375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.799556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.799586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.799765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.799795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.799954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.799984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 
00:36:05.298 [2024-10-01 01:53:44.800178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.800206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.800324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.800352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.800511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.800539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.800708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.800739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.800898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.800926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.801105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.801282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.801445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.801624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.801791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 
00:36:05.298 [2024-10-01 01:53:44.801923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.801950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.802109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.802302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.802474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.802657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.802837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.802995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.803172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.803314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.803491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 
00:36:05.298 [2024-10-01 01:53:44.803688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.803857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.803884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.298 qpair failed and we were unable to recover it. 00:36:05.298 [2024-10-01 01:53:44.804891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.298 [2024-10-01 01:53:44.804922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.805079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.805107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.805288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.805318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 
00:36:05.299 [2024-10-01 01:53:44.805503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.805534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.805664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.805694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.805877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.805904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.806952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.806979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.807117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 
00:36:05.299 [2024-10-01 01:53:44.807283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.807449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.807606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.807769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.807932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.807959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.808072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.808205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.808369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.808525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.808720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 
00:36:05.299 [2024-10-01 01:53:44.808929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.808957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.809957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.809984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.810131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.810159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.810265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.810310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.810489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.810519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 
00:36:05.299 [2024-10-01 01:53:44.810685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.810714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.810851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.810879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.811054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.811086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.811215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.811243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.299 [2024-10-01 01:53:44.811356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.299 [2024-10-01 01:53:44.811384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.299 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.811523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.811551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.811700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.811730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.811882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.811912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.812070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.812233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-10-01 01:53:44.812398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.812558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.812762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.812931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.812959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.813151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.813180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.813342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.813369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.813502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.813529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.813684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.813714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.813872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.813899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-10-01 01:53:44.814203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.814959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.814987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.815151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.815186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.815362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.815392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.815553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.815580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.815716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.815744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-10-01 01:53:44.815882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.815924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.816098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.816263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.816451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.816659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.816867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.816989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.817133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.817326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.817505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-10-01 01:53:44.817663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.300 [2024-10-01 01:53:44.817797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.300 [2024-10-01 01:53:44.817825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.300 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.817960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.817987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.818180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.818210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.818348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.818375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.818505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.818532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.818711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.818738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.818856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.818883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.819024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.819196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 
00:36:05.301 [2024-10-01 01:53:44.819399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.819564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.819730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.819878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.819922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.820875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.820905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 
00:36:05.301 [2024-10-01 01:53:44.821042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.821214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.821378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.821607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.821764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.821893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.821920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.822073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.822251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.822420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.822559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 
00:36:05.301 [2024-10-01 01:53:44.822774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.822938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.822965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.823147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.823174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.823360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.823425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.823616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.823645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.823807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.823835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.824004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.824032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.824195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.824224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.824389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.301 [2024-10-01 01:53:44.824417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.301 qpair failed and we were unable to recover it. 00:36:05.301 [2024-10-01 01:53:44.824529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.824556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 
00:36:05.302 [2024-10-01 01:53:44.824736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.824763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.824880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.824907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.825860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.825887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.826051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.826079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.826235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.826265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 
00:36:05.302 [2024-10-01 01:53:44.826414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.826445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.826556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.826587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.826707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.826735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.826866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979260 is same with the state(6) to be set 00:36:05.302 [2024-10-01 01:53:44.827121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.827163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.827286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.827334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.827486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.827513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.827733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.827785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.827935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.827965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.828118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.828146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 
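One record in the block above is different from the connect() spam: nvme_tcp.c:337:nvme_tcp_qpair_set_recv_state reports that the receive state of tqpair=0x1979260 "is same with the state(6) to be set". That diagnostic comes from a state-machine guard that notices it was asked to move a qpair into the receive state it is already in, which tends to happen when an error/teardown path runs more than once during the failed reconnects. The sketch below is a hypothetical illustration of that kind of guard, assuming nothing about SPDK's actual types: the enum values, struct, and function name are made up for the example.

/* Hypothetical sketch of a guard that produces a
 * "recv state ... is same with the state(N) to be set" style diagnostic.
 * The enum, struct, and names are illustrative, not SPDK's definitions. */
#include <stdio.h>

enum recv_state { RECV_IDLE, RECV_HDR, RECV_PAYLOAD, RECV_ERROR = 6 };

struct qpair { enum recv_state recv_state; };

static void qpair_set_recv_state(struct qpair *q, enum recv_state s)
{
    if (q->recv_state == s) {
        /* Re-entering the current state is usually harmless but worth
         * logging: it often means an error path was taken twice. */
        fprintf(stderr, "recv state of qpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)s);
        return;
    }
    q->recv_state = s;
}

int main(void)
{
    struct qpair q = { .recv_state = RECV_ERROR };
    qpair_set_recv_state(&q, RECV_ERROR);   /* triggers the diagnostic */
    return 0;
}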
00:36:05.302 [2024-10-01 01:53:44.828275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.828301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.828467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.828495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.828628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.828656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.828817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.828848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.829931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.829958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 
00:36:05.302 [2024-10-01 01:53:44.830092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.830262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.830442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.830640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.830829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.830968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.830995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.831135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.302 [2024-10-01 01:53:44.831161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.302 qpair failed and we were unable to recover it. 00:36:05.302 [2024-10-01 01:53:44.831278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.831304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.831415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.831444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.831633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.831663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 
00:36:05.303 [2024-10-01 01:53:44.831818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.831845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.832942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.832969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.833111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.833286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.833453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 
00:36:05.303 [2024-10-01 01:53:44.833618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.833782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.833949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.833979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.834143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.834171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.834311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.834339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.834488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.834515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.834676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.834703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.834879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.834906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.835072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.835099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.835259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.835288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 
00:36:05.303 [2024-10-01 01:53:44.835429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.835460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.835623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.835650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.835832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.835861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.836840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.836867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 00:36:05.303 [2024-10-01 01:53:44.837042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.303 [2024-10-01 01:53:44.837069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.303 qpair failed and we were unable to recover it. 
00:36:05.304 [2024-10-01 01:53:44.837209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.837236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.837374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.837401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.837585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.837615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.837771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.837798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.837916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.837943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.838067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.838218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.838386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.838576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.838767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 
00:36:05.304 [2024-10-01 01:53:44.838923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.838968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.839947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.839977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.840137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.840164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.840299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.840326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 00:36:05.304 [2024-10-01 01:53:44.840508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.304 [2024-10-01 01:53:44.840537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.304 qpair failed and we were unable to recover it. 
00:36:05.304 [2024-10-01 01:53:44.840662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.304 [2024-10-01 01:53:44.840693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.304 qpair failed and we were unable to recover it.
00:36:05.304 [2024-10-01 01:53:44.843013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.304 [2024-10-01 01:53:44.843055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:05.304 qpair failed and we were unable to recover it.
00:36:05.305 [2024-10-01 01:53:44.848130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.305 [2024-10-01 01:53:44.848171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:05.305 qpair failed and we were unable to recover it.
(The same three-line error pattern repeats continuously from 01:53:44.840 through 01:53:44.879 for every qpair connect attempt, alternating between tqpair=0x196b340, 0x7ff50c000b90, and 0x7ff500000b90. Every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it.")
00:36:05.310 [2024-10-01 01:53:44.879050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.879081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.879241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.879269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.879451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.879481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.879660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.879689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.879845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.879872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.880014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.880042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.880212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.880239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.880372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.880399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.880563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.880609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.880731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.880762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 
00:36:05.310 [2024-10-01 01:53:44.880976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.881169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.881382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.881542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.881680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.881879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.881905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.882040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.882184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.882380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.882542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 
00:36:05.310 [2024-10-01 01:53:44.882698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.882910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.882941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.883105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.883134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.883297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.883343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.883513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.883558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.883754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.883783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.883937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.310 [2024-10-01 01:53:44.883968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.310 qpair failed and we were unable to recover it. 00:36:05.310 [2024-10-01 01:53:44.884133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.884164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.884293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.884321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.884497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.884542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 
00:36:05.311 [2024-10-01 01:53:44.884718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.884770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.884930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.884958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.885930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.885963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.886073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.886263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 
00:36:05.311 [2024-10-01 01:53:44.886471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.886607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.886770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.886928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.886959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.887125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.887153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.887285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.887313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.887483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.887510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.887650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.887678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.887871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.887899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.888065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.888094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 
00:36:05.311 [2024-10-01 01:53:44.888251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.888281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.888442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.888469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.888610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.888638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.888803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.888830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.889911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.889939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 
00:36:05.311 [2024-10-01 01:53:44.890103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.890133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.890286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.890313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.890449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.311 [2024-10-01 01:53:44.890494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.311 qpair failed and we were unable to recover it. 00:36:05.311 [2024-10-01 01:53:44.890670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.890700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.890838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.890866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.891013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.891207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.891422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.891562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.891725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 
00:36:05.312 [2024-10-01 01:53:44.891918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.891945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.892061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.892089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.892232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.892260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.892432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.892460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.892640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.892669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.892825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.892856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.893015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.893180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.893382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.893542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 
00:36:05.312 [2024-10-01 01:53:44.893721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.893901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.893931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.894958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.894988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.895142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.895169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.895306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.895333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 
00:36:05.312 [2024-10-01 01:53:44.895523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.895553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.895701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.895729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.895890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.895938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.896074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.896103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.896266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.896294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.312 [2024-10-01 01:53:44.896443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.312 [2024-10-01 01:53:44.896473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.312 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.896651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.896681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.896833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.896860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.897004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.897228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 
00:36:05.313 [2024-10-01 01:53:44.897392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.897534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.897717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.897931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.897958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.898933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.898963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 
00:36:05.313 [2024-10-01 01:53:44.899134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.899162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.899304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.899332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.899470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.899498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.899645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.899675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.899839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.899867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.899981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.900152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.900316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.900514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.900678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 
00:36:05.313 [2024-10-01 01:53:44.900879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.900910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.901959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.901988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.902196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.902223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.902336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.902364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.902530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.902574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 
00:36:05.313 [2024-10-01 01:53:44.902718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.902745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.902921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.902965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.313 [2024-10-01 01:53:44.903116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.313 [2024-10-01 01:53:44.903147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.313 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.903307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.903335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.903474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.903501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.903688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.903719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.903853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.903880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.904031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.904077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.904255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.904286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 00:36:05.314 [2024-10-01 01:53:44.904422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.314 [2024-10-01 01:53:44.904449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.314 qpair failed and we were unable to recover it. 
00:36:05.319 [2024-10-01 01:53:44.939188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.939218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.939407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.939435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.939616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.939646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.939791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.939821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.939952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.939980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.940128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.940156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.940305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.940332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.940462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.940489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.940660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.940691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.940815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.940845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 
00:36:05.319 [2024-10-01 01:53:44.941084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.941113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.941278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.941322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.941475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.941505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.941656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.941683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.941825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.941873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.942048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.942077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.942216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.942243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.942396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.942427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.942603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.942634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.942821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.942848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 
00:36:05.319 [2024-10-01 01:53:44.943032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.943064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.319 [2024-10-01 01:53:44.943240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.319 [2024-10-01 01:53:44.943270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.319 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.943456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.943484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.943639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.943669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.943826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.943856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.944008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.944153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.944318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.944515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.944668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 
00:36:05.320 [2024-10-01 01:53:44.944850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.944881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.945912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.945942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.946088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.946117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.946256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.946299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.946452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.946483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 
00:36:05.320 [2024-10-01 01:53:44.946667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.946695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.946855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.946885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.947941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.947969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.948148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.948176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.948307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.948335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 
00:36:05.320 [2024-10-01 01:53:44.948518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.948548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.948700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.948730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.948891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.948919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.949102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.949134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.949319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.949354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.949514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.949542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.949680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.949726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.949870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.949900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.950089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.320 [2024-10-01 01:53:44.950117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.320 qpair failed and we were unable to recover it. 00:36:05.320 [2024-10-01 01:53:44.950298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.950329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 
00:36:05.321 [2024-10-01 01:53:44.950479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.950510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.950639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.950666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.950803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.950830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.951939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.951967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.952147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 
00:36:05.321 [2024-10-01 01:53:44.952340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.952477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.952639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.952801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.952961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.952989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.953137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.953181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.953340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.953367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.953521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.953551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.953727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.953757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.953916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.953943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 
00:36:05.321 [2024-10-01 01:53:44.954078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.954106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.954241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.954269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.954448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.954476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.954637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.954664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.954790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.954833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.955005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.955189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.955333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.955536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.955717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 
00:36:05.321 [2024-10-01 01:53:44.955890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.955921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.956070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.956098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.956206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.956234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.956369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.956397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.956535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.956568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.321 [2024-10-01 01:53:44.956752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.321 [2024-10-01 01:53:44.956782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.321 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.956924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.956954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.957131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.957159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.957350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.957380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.957533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.957563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 
00:36:05.322 [2024-10-01 01:53:44.957713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.957740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.957870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.957913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.958093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.958124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.958288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.958315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.958451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.958479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.958665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.958695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.958872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.958899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.959053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.959084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.959236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.959267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.959453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.959480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 
00:36:05.322 [2024-10-01 01:53:44.959586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.959631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.959814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.959841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.959980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.960152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.960292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.960457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.960618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.960821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.960848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.961062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.961226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 
00:36:05.322 [2024-10-01 01:53:44.961383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.961584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.961711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.961874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.961901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.962067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.962096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.962235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.962278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.962411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.962441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.962627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.962654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.322 [2024-10-01 01:53:44.962837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.322 [2024-10-01 01:53:44.962867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.322 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.963043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.963072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 
00:36:05.323 [2024-10-01 01:53:44.963257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.963284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.963414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.963457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.963641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.963670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.963828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.963854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.963989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.964210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.964374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.964542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.964765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.964939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.964968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 
00:36:05.323 [2024-10-01 01:53:44.965154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.965185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.965357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.965388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.965538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.965569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.965728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.965755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.965935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.965965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.966143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.966174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.966365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.966392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.966573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.966603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.966735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.966765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.966923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.966950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 
00:36:05.323 [2024-10-01 01:53:44.967094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.967265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.967402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.967589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.967790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.967944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.967970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.968164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.968192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.968352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.968382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.968537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.968564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.968743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.968773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 
00:36:05.323 [2024-10-01 01:53:44.968922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.968952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.969148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.969176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.969336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.969366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.969541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.969570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.969704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.969731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.323 [2024-10-01 01:53:44.969875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.323 [2024-10-01 01:53:44.969902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.323 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.970081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.970109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.970240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.970267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.970403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.970448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.970704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.970755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 
00:36:05.324 [2024-10-01 01:53:44.970909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.970935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.971116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.971147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.971332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.971358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.971521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.971548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.971708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.971743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.971866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.971899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.972053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.972184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.972347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.972510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 
00:36:05.324 [2024-10-01 01:53:44.972690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.972898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.972927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.973135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.973323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.973500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.973708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.973840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.973974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.974007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.974190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.974217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.974352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.974397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 
00:36:05.324 [2024-10-01 01:53:44.974566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.974618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.974806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.974834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.974987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.975153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.975359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.975487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.975674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.975873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.975900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.976054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.976082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.976222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.976258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 
00:36:05.324 [2024-10-01 01:53:44.976420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.976447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.976607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.976637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.976770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.324 [2024-10-01 01:53:44.976797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.324 qpair failed and we were unable to recover it. 00:36:05.324 [2024-10-01 01:53:44.976991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.977167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.977310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.977502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.977652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.977835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.977865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.978027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.978058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 
00:36:05.325 [2024-10-01 01:53:44.978212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.978243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.978450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.978504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.978668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.978696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.978837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.978864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.979054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.979106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.979275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.979305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.979444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.979490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.979736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.979788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.979944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.979972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.980103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.980130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 
00:36:05.325 [2024-10-01 01:53:44.980307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.980350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.980535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.980562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.980745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.980775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.980961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.980991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.981122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.981149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.981312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.981353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.981536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.981564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.981703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.981730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.981920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.981949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.982143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.982170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 
00:36:05.325 [2024-10-01 01:53:44.982317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.982344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.982486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.982513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.982674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.982701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.982884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.982914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 00:36:05.325 [2024-10-01 01:53:44.983967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.325 [2024-10-01 01:53:44.983993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.325 qpair failed and we were unable to recover it. 
00:36:05.325 [2024-10-01 01:53:44.984100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.984126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.984248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.984289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.984420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.984448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.984631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.984662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.984816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.984846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.985010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.985165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.985337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.985530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.985694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 
00:36:05.326 [2024-10-01 01:53:44.985839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.985868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.986857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.986889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.987017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.987150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.987342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 
00:36:05.326 [2024-10-01 01:53:44.987519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.987685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.987871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.987900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.988054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.988083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.988248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.988274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.988548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.988599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.988764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.988790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.988896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.988923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.989105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.989135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.989298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.989324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 
00:36:05.326 [2024-10-01 01:53:44.989508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.989537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.989679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.989716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.989878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.326 [2024-10-01 01:53:44.989915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.326 qpair failed and we were unable to recover it. 00:36:05.326 [2024-10-01 01:53:44.990071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.990239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.990407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.990574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.990743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.990941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.990968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.991133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.991161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 
00:36:05.327 [2024-10-01 01:53:44.991263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.991290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.991447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.991487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.991681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.991713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.991888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.991933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.992100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.992128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.992268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.992296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.992488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.992534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.992716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.992762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.992931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.992957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.993104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.993132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 
00:36:05.327 [2024-10-01 01:53:44.993322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.993371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.993557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.993601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.993762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.993806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.993951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.993979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.994149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.994189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.994398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.994431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.994685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.994736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.994869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.994895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.995049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.995222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 
00:36:05.327 [2024-10-01 01:53:44.995428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.995604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.995753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.995947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.995988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.996152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.996183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.996335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.996380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.996530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.996575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.996737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.996765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.996937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.996964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 00:36:05.327 [2024-10-01 01:53:44.997090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.327 [2024-10-01 01:53:44.997119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.327 qpair failed and we were unable to recover it. 
00:36:05.327 [2024-10-01 01:53:44.997234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.327 [2024-10-01 01:53:44.997261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.328 qpair failed and we were unable to recover it.
00:36:05.333 [2024-10-01 01:53:44.997 through 01:53:45.036] the same three-line sequence (posix.c:1055:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x196b340, 0x7ff500000b90, 0x7ff504000b90, and 0x7ff50c000b90, all targeting addr=10.0.0.2, port=4420.
00:36:05.333 [2024-10-01 01:53:45.036820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.036847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.036978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.037131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.037302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.037544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.037754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.037945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.037971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.038121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.038149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.038300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.038330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-10-01 01:53:45.038510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-10-01 01:53:45.038536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 
00:36:05.333 [2024-10-01 01:53:45.038670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.038714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.038870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.038898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.039833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.039859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.040018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.040180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 
00:36:05.334 [2024-10-01 01:53:45.040332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.040500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.040662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.040852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.040882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.041882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.041927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 
00:36:05.334 [2024-10-01 01:53:45.042086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.042113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.042253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.042280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.042470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.042499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.042636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.042662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.042799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.042826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.042971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.043138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.043330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.043559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.043770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 
00:36:05.334 [2024-10-01 01:53:45.043905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.043932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.044125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.044268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.044429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.044627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.044840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.044970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.334 [2024-10-01 01:53:45.045021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.334 qpair failed and we were unable to recover it. 00:36:05.334 [2024-10-01 01:53:45.045168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.045197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.045378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.045405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.045560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.045590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 
00:36:05.335 [2024-10-01 01:53:45.045758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.045786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.045919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.045946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.046108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.046135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.046312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.046341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.046502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.046529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.046691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.046733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.046853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.046883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.047016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.047208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.047388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 
00:36:05.335 [2024-10-01 01:53:45.047554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.047714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.047878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.047908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.048120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.048333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.048513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.048697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.048837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.048994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.049157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 
00:36:05.335 [2024-10-01 01:53:45.049326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.049489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.049704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.049889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.049918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.050943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.050970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 
00:36:05.335 [2024-10-01 01:53:45.051102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.051129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.051258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.051284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.051415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.051460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.051607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.051636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.335 qpair failed and we were unable to recover it. 00:36:05.335 [2024-10-01 01:53:45.051791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.335 [2024-10-01 01:53:45.051817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.051946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.051991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.052156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.052344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.052516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.052669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-10-01 01:53:45.052830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.052965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.052992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.053201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.053231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.053353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.053380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.053490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.053517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.053679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.053706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.053848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.053888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.054061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.054092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.054235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.054263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.054427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.054455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-10-01 01:53:45.054598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.054627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.054763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.054808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.054972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.055835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.055977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.056172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-10-01 01:53:45.056383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.056583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.056768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.056933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.056961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.057136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.057181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.057343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.057387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.057591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.057619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.057786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.057813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.057947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.057974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.058165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.058214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-10-01 01:53:45.058392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.058419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.058557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.058584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.058725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-10-01 01:53:45.058752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-10-01 01:53:45.058901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.058930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.059909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.059935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-10-01 01:53:45.060078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.060109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.060229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.060259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.060413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.060442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.060651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.060709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.060821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.060850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.061004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.061212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.061396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.061537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.061748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-10-01 01:53:45.061932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.061959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.062123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.062169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.062353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.062406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.062593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.062637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.062782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.062808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.062974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.063152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.063339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.063542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.063716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-10-01 01:53:45.063890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.063920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.064959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.064988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.065182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.065209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.065368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.065397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.065516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.065545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-10-01 01:53:45.065677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.065721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.065905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.065935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.066078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.066106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.066248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.066279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.066416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.066442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.066595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.066624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-10-01 01:53:45.066776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-10-01 01:53:45.066806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.066925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.066954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.067122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.067149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.067276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.067305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-10-01 01:53:45.067479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.067508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.067651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.067680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.067830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.067860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.068959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.068985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.069133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.069159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-10-01 01:53:45.069317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.069346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.069526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.069556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.069699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.069742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.069918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.069948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.070140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.070167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.070331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.070359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.070478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.070507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.070772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.070823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.070976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.071166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-10-01 01:53:45.071306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.071479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.071690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.071928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.071957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.072150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.072177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.072282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.072309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.072510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.072573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.072698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.072726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.072881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.072910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.073085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.073126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-10-01 01:53:45.073303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.073332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.073496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.073542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.073717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.073796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.073935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.073964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.074139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.074187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.074313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.074358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.074501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.074547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.074732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.074776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.074940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.074968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-10-01 01:53:45.075134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.075179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-10-01 01:53:45.075373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-10-01 01:53:45.075403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.075690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.075740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.075884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.075911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.076879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.076905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.077049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.077076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-10-01 01:53:45.077259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.077288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.077457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.077486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.077611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.077641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.077791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.077821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.077978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.078146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.078295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.078477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.078646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.078828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.078857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-10-01 01:53:45.079052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.079184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.079359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.079528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.079697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.079886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.079915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.080083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.080111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.080294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.080339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.080530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.080575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.080712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.080756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-10-01 01:53:45.080870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.080898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.081933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.081963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.082115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.082147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.082325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.082369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.082523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.082572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-10-01 01:53:45.082707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.082750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.082916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.082943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.083074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.083120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.083259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-10-01 01:53:45.083286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-10-01 01:53:45.083399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.083427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.083569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.083595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.083737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.083763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.083883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.083910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.084052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.084084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.084260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.084305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-10-01 01:53:45.084492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.084535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.084667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.084694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.084862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.084889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.085905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.085933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.086073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.086101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-10-01 01:53:45.086265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.086295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.086491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.086541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.086731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.086761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.086909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.086936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.087125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.087156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.087287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.087317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.087442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.087472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.087623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.087653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-10-01 01:53:45.087778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-10-01 01:53:45.087807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.087990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 
00:36:05.341 [2024-10-01 01:53:45.088207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.088390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.088567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.088773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.088930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.088959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.089105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.089132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.089303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.089330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.089517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.089547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.089677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.089707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.089918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.089948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 
00:36:05.341 [2024-10-01 01:53:45.090093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.090241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.090410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.090630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.090771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.090953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.090982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.091169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.091195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.091382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.091412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.091551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.091603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.091761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.091790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 
00:36:05.341 [2024-10-01 01:53:45.091967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.092165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.092298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.092494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.092705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.092887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.092915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.093076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.093103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.093268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.093295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.093454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.093483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-10-01 01:53:45.093633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-10-01 01:53:45.093663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 
00:36:05.341-00:36:05.649 [2024-10-01 01:53:45.093834 through 01:53:45.130473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 and nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error repeat continuously for tqpair=0x196b340, 0x7ff504000b90 and 0x7ff50c000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."
00:36:05.649 [2024-10-01 01:53:45.130651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.130681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.130834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.130864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.131947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.131976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.132117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.132143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.132268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.132297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 
00:36:05.649 [2024-10-01 01:53:45.132457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.649 [2024-10-01 01:53:45.132486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.649 qpair failed and we were unable to recover it. 00:36:05.649 [2024-10-01 01:53:45.132611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.132641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.132788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.132821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.132954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.132982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.133926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.133953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 
00:36:05.650 [2024-10-01 01:53:45.134067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.134214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.134383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.134517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.134676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.134866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.134893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 
00:36:05.650 [2024-10-01 01:53:45.135600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.135924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.135951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.136180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.136208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.136315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.136342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.136455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.136482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.136634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.136663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.136816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.136845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.137015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.137188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 
00:36:05.650 [2024-10-01 01:53:45.137372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.137582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.137806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.137959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.650 [2024-10-01 01:53:45.137987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.650 qpair failed and we were unable to recover it. 00:36:05.650 [2024-10-01 01:53:45.138145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.138196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.138360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.138405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.138569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.138613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.138765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.138810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.138928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.138957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.139103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.139133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 
00:36:05.651 [2024-10-01 01:53:45.139256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.139285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.139455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.139489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.139655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.139684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.139909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.139938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.140121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.140326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.140509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.140697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.140862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.140978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 
00:36:05.651 [2024-10-01 01:53:45.141133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.141278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.141444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.141642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.141782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.141913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.141939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.142085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.142253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.142423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.142596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 
00:36:05.651 [2024-10-01 01:53:45.142777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.142935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.142966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.143089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.143116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.143235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.143262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.143425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.143454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-10-01 01:53:45.143627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-10-01 01:53:45.143654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.143800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.143827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.143962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.143988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.144141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.144303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-10-01 01:53:45.144442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.144630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.144774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.144949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.144975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.145088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.145115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.145232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.145258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.145449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.145475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.145700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.145729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.145868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.145894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.146016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.146043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-10-01 01:53:45.146259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.146286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.146436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.146465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.146609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.146639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.146792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.146821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.146974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.147136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.147278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.147442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.147637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.147836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.147865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-10-01 01:53:45.148035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.148173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.148322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.148485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.148638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.148813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.148842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.149029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.149189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.149353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.149535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-10-01 01:53:45.149684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.149889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.149918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.150082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-10-01 01:53:45.150109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-10-01 01:53:45.150227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.150253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.150422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.150448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.150613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.150643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.150755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.150784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.150944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.150971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.151113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.151153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.151294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.151325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-10-01 01:53:45.151499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.151530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.151661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.151703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.151877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.151907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.152964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.152994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.153138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.153164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-10-01 01:53:45.153300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.153344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.153574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.153602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.153770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.153819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.153947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.153976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.154137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.154164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.154279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.154306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.154489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.154519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.154643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.154672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.154833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.154862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.155051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.155092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-10-01 01:53:45.155252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.155280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.155450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.155480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.155632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.155662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.155820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.155849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.155971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.156121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.156257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.156443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.156655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-10-01 01:53:45.156838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-10-01 01:53:45.156868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.654 [2024-10-01 01:53:45.157029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.157184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.157343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.157528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.157689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.157845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.157876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.158026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.158067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.158194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.158223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.158390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.158435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.158591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.158635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 
00:36:05.654 [2024-10-01 01:53:45.158795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.158840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.158969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.159190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.159398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.159626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.159809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.159949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.159976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.160108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.160148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.160747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.160786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.160940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.160971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 
00:36:05.654 [2024-10-01 01:53:45.161100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.161237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.161402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.161576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.161746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.161912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.161940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.162102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.162130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.162272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.162315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.162502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.162546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.162687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.162715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 
00:36:05.654 [2024-10-01 01:53:45.162875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.162902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.163113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.163298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.163505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.163672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.163840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.163972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-10-01 01:53:45.164004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 00:36:05.654 [2024-10-01 01:53:45.164158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.164203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.164383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.164431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.164625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.164652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 
00:36:05.655 [2024-10-01 01:53:45.164791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.164818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.164962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.164989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.165151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.165201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.165356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.165384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.165537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.165583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.165750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.165777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.165903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.165942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.166094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.166127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.166361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.166391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.166541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.166590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 
00:36:05.655 [2024-10-01 01:53:45.166743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.166791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.166933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.166963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.167933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.167963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.168109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.168139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.168278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.168322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 
00:36:05.655 [2024-10-01 01:53:45.168517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.168545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.168700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.168750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.168905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.168932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.169069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.169109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.169265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.169311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.169531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.169560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.169703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.169751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.169903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.169932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.170075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.170220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 
00:36:05.655 [2024-10-01 01:53:45.170431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.170602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.170765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.170926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.170955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.655 qpair failed and we were unable to recover it. 00:36:05.655 [2024-10-01 01:53:45.171107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.655 [2024-10-01 01:53:45.171147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.171333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.171362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.171530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.171574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.171739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.171783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.171925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.171952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.172075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.172103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 
00:36:05.656 [2024-10-01 01:53:45.172233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.172279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.172454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.172482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.172681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.172716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.172846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.172873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.173858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.173886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 
00:36:05.656 [2024-10-01 01:53:45.174045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.174075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.174206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.174239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.174375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.174419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.174601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.174649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.174842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.174872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 
00:36:05.656 [2024-10-01 01:53:45.175857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.175884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.175987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-10-01 01:53:45.176845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-10-01 01:53:45.176952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.176978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.177131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.177172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.177347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.177378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-10-01 01:53:45.177609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.177639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.177787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.177814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.177975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.178150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.178297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.178527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.178685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.178856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.178885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-10-01 01:53:45.179335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.179868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.179986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.180151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.180311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.180461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.180646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.180819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.180848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-10-01 01:53:45.180971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.181157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.181387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.181573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.181809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.181964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.181991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.182140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.182184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.182397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.182433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.182667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.182715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.182888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.182918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-10-01 01:53:45.183058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.183086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.183185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.183211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.183339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.183368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.183511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-10-01 01:53:45.183541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-10-01 01:53:45.183732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.183758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-10-01 01:53:45.183947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.183977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-10-01 01:53:45.184118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.184146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-10-01 01:53:45.184322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.184351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-10-01 01:53:45.184523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.184553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-10-01 01:53:45.184770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-10-01 01:53:45.184840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-10-01 01:53:45.184951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.658 [2024-10-01 01:53:45.184989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.658 qpair failed and we were unable to recover it.
00:36:05.658 [2024-10-01 01:53:45.185128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.658 [2024-10-01 01:53:45.185155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.658 qpair failed and we were unable to recover it.
00:36:05.658 .. 00:36:05.663 [2024-10-01 01:53:45.185283 .. 01:53:45.221689] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x196b340, 0x7ff504000b90 and 0x7ff50c000b90, all with addr=10.0.0.2, port=4420; no qpair could be recovered.
00:36:05.663 [2024-10-01 01:53:45.221857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-10-01 01:53:45.221883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-10-01 01:53:45.222032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-10-01 01:53:45.222059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-10-01 01:53:45.222169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-10-01 01:53:45.222197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-10-01 01:53:45.222319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.222346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.222485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.222512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.222667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.222713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.222856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.222884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.223057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.223219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.223424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-10-01 01:53:45.223612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.223776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.223944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.223971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.224862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.224970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-10-01 01:53:45.225121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.225270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.225444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.225609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.225771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.225951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.225978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.226110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.226257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.226428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.226593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-10-01 01:53:45.226760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.226923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.226951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.227943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.227971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.228102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.228131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-10-01 01:53:45.228270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.228320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-10-01 01:53:45.228513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-10-01 01:53:45.228556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.228710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.228755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.228889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.228916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.229896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.229925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.230066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.230095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-10-01 01:53:45.230228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.230276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.230452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.230495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.230681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.230710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.230872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.230899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.231966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.231995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-10-01 01:53:45.232130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.232155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.232278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.232307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.232435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.232478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.232627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.232655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.232787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.232812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.233014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.233159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.233360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.233504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.233645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 
00:36:05.665 [2024-10-01 01:53:45.233829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.233857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.234023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.234066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.234184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.665 [2024-10-01 01:53:45.234210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.665 qpair failed and we were unable to recover it. 00:36:05.665 [2024-10-01 01:53:45.234351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.234391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.234539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.234575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.234734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.234764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.234919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.234945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.235068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.235095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.235214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.235240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.235401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.235437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-10-01 01:53:45.235627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.235656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.235806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.235835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.235995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.236962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.236988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.237101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.237127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-10-01 01:53:45.237260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.237285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.237440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.237468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.237653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.237682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.237861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.237889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.238878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.238907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-10-01 01:53:45.239061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.239092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.239233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.239259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.239503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.239531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.239710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.239738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.239856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.239885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.240029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.240189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.240333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.240562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.666 [2024-10-01 01:53:45.240727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 
00:36:05.666 [2024-10-01 01:53:45.240897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.666 [2024-10-01 01:53:45.240923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.666 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.241957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.241986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.242129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.242155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.242317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.242346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.242465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.242494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-10-01 01:53:45.242619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.242649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.242827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.242856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.242981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.243014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.243151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.243178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.243335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.243363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.243541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.243570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.243745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.243774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.243968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.244015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.244171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.244200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-10-01 01:53:45.244350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-10-01 01:53:45.244393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-10-01 01:53:45.244549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.667 [2024-10-01 01:53:45.244578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:05.667 qpair failed and we were unable to recover it.
00:36:05.667 [2024-10-01 01:53:45.246144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.667 [2024-10-01 01:53:45.246172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.667 qpair failed and we were unable to recover it.
(The three-line sequence above — connect() failed with errno = 111, sock connection error reported by nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." — repeats continuously in the log from 2024-10-01 01:53:45.244549 through 01:53:45.283733 (elapsed 00:36:05.667–00:36:05.673), alternating between tqpair=0x7ff504000b90 and tqpair=0x196b340. Every attempt targets addr=10.0.0.2, port=4420 and fails the same way.)
00:36:05.673 [2024-10-01 01:53:45.283959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.283985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.284133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.284159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.284326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.284368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.284547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.284576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.284699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.284728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.284883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.284913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.285129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.285169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.285316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.285344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.285506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.285551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.285719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.285766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 
00:36:05.673 [2024-10-01 01:53:45.285933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.285959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.286111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.286140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.286300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.286350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.286536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.286580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.286752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.286797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.286934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.286961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.287110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.287141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.287272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.287300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.287487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.287516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.287690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.287719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 
00:36:05.673 [2024-10-01 01:53:45.287873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.287901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.288034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.673 [2024-10-01 01:53:45.288063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.673 qpair failed and we were unable to recover it. 00:36:05.673 [2024-10-01 01:53:45.288170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.288197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.288361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.288406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.288596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.288641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.288777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.288803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.288916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.288943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.289074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.289230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.289378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-10-01 01:53:45.289552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.289756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.289935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.289963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.290154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.290182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.290341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.290385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.290544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.290574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.290775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.290803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.290926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.290953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.291122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.291166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.291362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.291391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-10-01 01:53:45.291525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.291572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.291740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.291767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.291905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.291932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.292089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.292138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.292338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.292382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.292567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.292612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.292729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.292755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.292871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.292899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.293064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.293231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 
00:36:05.674 [2024-10-01 01:53:45.293382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.293558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.293737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.293893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.293922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.294123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.294152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.294299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.294327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.294555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.674 [2024-10-01 01:53:45.294584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.674 qpair failed and we were unable to recover it. 00:36:05.674 [2024-10-01 01:53:45.294793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.294838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.294988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.295020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.295182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.295208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-10-01 01:53:45.295364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.295408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.295570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.295613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.295770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.295813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.295983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.296017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.296236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.296262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.296464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.296492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.296669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.296698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.296826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.296871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.297008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.297197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-10-01 01:53:45.297377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.297577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.297740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.297947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.297976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.298164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.298203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.298341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.298374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.298559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.298590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.298753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.298780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.298946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.298972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.299133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.299163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 
00:36:05.675 [2024-10-01 01:53:45.299460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.299514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.299852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.299905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.675 [2024-10-01 01:53:45.300082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.675 [2024-10-01 01:53:45.300111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.675 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.300296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.300325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.300538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.300567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.300720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.300745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.300883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.300909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.301064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.301093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.301297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.301322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.301520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.301566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-10-01 01:53:45.301705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.301731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.301879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.301905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.302095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.302124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.302342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.302403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.302588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.302614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.302729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.302755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.302924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.302950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.303102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.303135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.303287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.303316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.303522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.303551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-10-01 01:53:45.303725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.303751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.303968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.303993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.304185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.304214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.304363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.304391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.304558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.304590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.304764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.304790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.304921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.304947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.305111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.305140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.305368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.305424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.305600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.305629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 
00:36:05.676 [2024-10-01 01:53:45.305748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.305774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.305917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.305943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.306097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.306126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.306289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.306318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.306578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.306623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.676 qpair failed and we were unable to recover it. 00:36:05.676 [2024-10-01 01:53:45.306781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.676 [2024-10-01 01:53:45.306806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.306942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.306967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.307112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.307141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.307359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.307388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.307561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.307608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 
00:36:05.677 [2024-10-01 01:53:45.307804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.307830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.307988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.308182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.308352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.308593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.308785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.308953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.308979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.309182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.309211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.309354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.309383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.309593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.309622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 
00:36:05.677 [2024-10-01 01:53:45.309776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.309802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.309952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.309978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.310150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.310178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.310323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.310352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.310498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.310526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.310660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.310687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.310823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.310850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.311036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.311066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.311250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.311279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.311459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.311487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 
00:36:05.677 [2024-10-01 01:53:45.311618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.311644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.311813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.311839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.311984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.312945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.312973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.313213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.313239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 
00:36:05.677 [2024-10-01 01:53:45.313376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.313402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.313538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.313563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.677 [2024-10-01 01:53:45.313674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.677 [2024-10-01 01:53:45.313700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.677 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.313806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.313832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.314069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.314256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.314419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.314657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.314824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.314976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 
00:36:05.678 [2024-10-01 01:53:45.315166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.315305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.315462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.315600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.315733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.315759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.315978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.316165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.316328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.316495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.316629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 
00:36:05.678 [2024-10-01 01:53:45.316787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.316817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.316976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.317900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.317925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 
00:36:05.678 [2024-10-01 01:53:45.318379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.318946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.318971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.319151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.319178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.319289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.319315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.319450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.319475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.319577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.319602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.678 qpair failed and we were unable to recover it. 00:36:05.678 [2024-10-01 01:53:45.319742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.678 [2024-10-01 01:53:45.319767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 
00:36:05.679 [2024-10-01 01:53:45.319926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.319955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.320134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.320173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.320345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.320377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.320516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.320542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.320679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.320706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.320868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.320897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.321053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.321217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.321380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.321541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 
00:36:05.679 [2024-10-01 01:53:45.321682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.321852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.321878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.322970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.322995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.323169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.323196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 
00:36:05.679 [2024-10-01 01:53:45.323323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.323349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.323497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.323522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.323683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.323711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.323850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.323879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.324844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.324883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 
00:36:05.679 [2024-10-01 01:53:45.325031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.325061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.325227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.325255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.325456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.325501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.325634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.679 [2024-10-01 01:53:45.325661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.679 qpair failed and we were unable to recover it. 00:36:05.679 [2024-10-01 01:53:45.325800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.325826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.325972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.326157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.326392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.326589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.326752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 
00:36:05.680 [2024-10-01 01:53:45.326897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.326926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.327948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.327974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.328114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.328140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.328278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.328304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.328436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.328478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 
00:36:05.680 [2024-10-01 01:53:45.328663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.328691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.328818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.328844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.328978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.329804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.329986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.330205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 
00:36:05.680 [2024-10-01 01:53:45.330423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.330617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.330778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.330941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.330968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.331116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.331142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.331391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.331443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.680 qpair failed and we were unable to recover it. 00:36:05.680 [2024-10-01 01:53:45.331606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.680 [2024-10-01 01:53:45.331635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.331760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.331789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.331942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.331971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.332154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.332185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 
00:36:05.681 [2024-10-01 01:53:45.332347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.332408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.332586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.332615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.332795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.332823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.332974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.333159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.333314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.333512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.333683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.333857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.333885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.334066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 
00:36:05.681 [2024-10-01 01:53:45.334234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.334423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.334630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.334785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.334956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.334985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.335128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.335154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.335291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.335318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.335501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.335529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.335652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.335680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.335861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.335890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 
00:36:05.681 [2024-10-01 01:53:45.336057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.336217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.336395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.336567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.336771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.336960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.336986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.337137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.337162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.337302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.337327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.337444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.337486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.337638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.337666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 
00:36:05.681 [2024-10-01 01:53:45.337844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.337872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.338026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.338068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.338201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.338227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.338359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.681 [2024-10-01 01:53:45.338385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.681 qpair failed and we were unable to recover it. 00:36:05.681 [2024-10-01 01:53:45.338488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.338514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-10-01 01:53:45.338677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.338705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-10-01 01:53:45.338855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.338883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-10-01 01:53:45.339021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.339047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-10-01 01:53:45.339209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.339235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 00:36:05.682 [2024-10-01 01:53:45.339436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.682 [2024-10-01 01:53:45.339462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.682 qpair failed and we were unable to recover it. 
00:36:05.682 [2024-10-01 01:53:45.339645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.682 [2024-10-01 01:53:45.339678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.682 qpair failed and we were unable to recover it.
00:36:05.682 [2024-10-01 01:53:45.339792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.682 [2024-10-01 01:53:45.339821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.682 qpair failed and we were unable to recover it.
[the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every retry logged between 01:53:45.339645 and 01:53:45.376467]
00:36:05.687 [2024-10-01 01:53:45.376441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.687 [2024-10-01 01:53:45.376467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.687 qpair failed and we were unable to recover it.
00:36:05.687 [2024-10-01 01:53:45.376598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-10-01 01:53:45.376623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-10-01 01:53:45.376758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-10-01 01:53:45.376784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-10-01 01:53:45.376981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-10-01 01:53:45.377012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-10-01 01:53:45.377129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-10-01 01:53:45.377154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.687 qpair failed and we were unable to recover it. 00:36:05.687 [2024-10-01 01:53:45.377313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.687 [2024-10-01 01:53:45.377339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.377472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.377497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.377683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.377711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.377865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.377898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.378055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.378081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.378217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.378258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 
00:36:05.688 [2024-10-01 01:53:45.378409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.378437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.378601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.378626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.378794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.378836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.379059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.379088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.379252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.379278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.379420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.379446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.379605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.379634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.379820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.379846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.380008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.380162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 
00:36:05.688 [2024-10-01 01:53:45.380333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.380478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.380641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.380776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.380802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.381946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.381972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 
00:36:05.688 [2024-10-01 01:53:45.382085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.382226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.382420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.382562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.382714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.382902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.382931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.383097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.383260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.383461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.383629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 
00:36:05.688 [2024-10-01 01:53:45.383819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.383969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.688 [2024-10-01 01:53:45.383995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.688 qpair failed and we were unable to recover it. 00:36:05.688 [2024-10-01 01:53:45.384146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.384171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.384330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.384359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.384517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.384542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.384704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.384730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.384915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.384943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 
00:36:05.689 [2024-10-01 01:53:45.385481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.385935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.385960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.386918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.386944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 
00:36:05.689 [2024-10-01 01:53:45.387117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.387143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.387321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.387354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.387476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.387501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.387644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.387669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.387806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.387832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.387993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.388210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.388361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.388528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.388785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 
00:36:05.689 [2024-10-01 01:53:45.388936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.388964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.389152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.389178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.389328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.389356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.389509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.389538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.389663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.389689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.689 qpair failed and we were unable to recover it. 00:36:05.689 [2024-10-01 01:53:45.389800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.689 [2024-10-01 01:53:45.389825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.389987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.390197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.390363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.390524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-10-01 01:53:45.390722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.390890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.390933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.391951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.391992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.392182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.392208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.392326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.392351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-10-01 01:53:45.392461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.392486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.392638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.392666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.392836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.392864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.393875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.393900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.394023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.394065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-10-01 01:53:45.394247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.394273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.394402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.394428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.394612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.394641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.394813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.394841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.394990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.395155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.395343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.395528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.395733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.395940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.395968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 
00:36:05.690 [2024-10-01 01:53:45.396123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.396149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.396295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.396320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.396457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.690 [2024-10-01 01:53:45.396482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.690 qpair failed and we were unable to recover it. 00:36:05.690 [2024-10-01 01:53:45.396611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.396636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.396788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.396814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.396953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.396994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.397189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.397215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.397416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.397445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.397598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.397627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.397753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.397779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-10-01 01:53:45.397882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.397908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.398907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.398933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.399125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.399151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.399286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.399312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 00:36:05.691 [2024-10-01 01:53:45.399445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.399493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it. 
00:36:05.691 [2024-10-01 01:53:45.399671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.691 [2024-10-01 01:53:45.399700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.691 qpair failed and we were unable to recover it.
[... the same error triplet (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously between 01:53:45.399671 and 01:53:45.436712 ...]
00:36:05.697 [2024-10-01 01:53:45.436683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.436712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it.
00:36:05.697 [2024-10-01 01:53:45.436866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.436892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.437878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.437912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.438095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.438228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.438380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 
00:36:05.697 [2024-10-01 01:53:45.438568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.438711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.438836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.438862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.697 qpair failed and we were unable to recover it. 00:36:05.697 [2024-10-01 01:53:45.439907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.697 [2024-10-01 01:53:45.439935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.440071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.440097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 
00:36:05.698 [2024-10-01 01:53:45.440243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.440278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.440458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.440484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.440595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.440620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.440854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.440882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.441953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.441981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 
00:36:05.698 [2024-10-01 01:53:45.442123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.442148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.442307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.442333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.442487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.442515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.442658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.442690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.442831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.442856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.443035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.443202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.443365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.443556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.443712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 
00:36:05.698 [2024-10-01 01:53:45.443913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.443938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.444162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.444336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.444516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.444646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.444847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.444978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.445157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.445287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.445453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 
00:36:05.698 [2024-10-01 01:53:45.445633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.445815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.445843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.446029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.446056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.446190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.446216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.446414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.446442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.446569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.698 [2024-10-01 01:53:45.446595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.698 qpair failed and we were unable to recover it. 00:36:05.698 [2024-10-01 01:53:45.446730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.446755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.446945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.446973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.447137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.447163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.447305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.447348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 
00:36:05.699 [2024-10-01 01:53:45.447519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.447544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.447688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.447714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.447879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.447905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.448931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.448956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.449098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 
00:36:05.699 [2024-10-01 01:53:45.449259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.449459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.449616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.449774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.449960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.449988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.450170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.450195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.450338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.450364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.450504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.450529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.450676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.450701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.450837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.450863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 
00:36:05.699 [2024-10-01 01:53:45.451020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.451182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.451338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.451481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.451675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.699 [2024-10-01 01:53:45.451877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.699 [2024-10-01 01:53:45.451905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.699 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.452063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.452089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.452231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.452256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.452438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.452464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.452597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.452622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 
00:36:05.700 [2024-10-01 01:53:45.452817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.452846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.452979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.453942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.453967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.454118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.454253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 
00:36:05.700 [2024-10-01 01:53:45.454390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.454569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.454743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.454956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.454984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.455905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.455930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 
00:36:05.700 [2024-10-01 01:53:45.456068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.456224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.456362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.456502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.456634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.456838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.456866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 
00:36:05.700 [2024-10-01 01:53:45.457655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.457859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.457977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.458007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.700 qpair failed and we were unable to recover it. 00:36:05.700 [2024-10-01 01:53:45.458170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.700 [2024-10-01 01:53:45.458197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.458306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.458332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.458438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.458463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.458577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.458603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.458713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.458739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.458878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.458908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 00:36:05.701 [2024-10-01 01:53:45.459086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.701 [2024-10-01 01:53:45.459112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.701 qpair failed and we were unable to recover it. 
00:36:05.701 [2024-10-01 01:53:45.459229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.701 [2024-10-01 01:53:45.459254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.701 qpair failed and we were unable to recover it.
[... the identical three-line failure -- posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats for every connection attempt from 01:53:45.459 through 01:53:45.494 ...]
00:36:05.991 [2024-10-01 01:53:45.494310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.991 [2024-10-01 01:53:45.494336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.991 qpair failed and we were unable to recover it.
00:36:05.991 [2024-10-01 01:53:45.494476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.991 [2024-10-01 01:53:45.494502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.991 qpair failed and we were unable to recover it. 00:36:05.991 [2024-10-01 01:53:45.494607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.494633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.494769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.494795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.494962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.494987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.495186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.495364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.495553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.495715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.495881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.495994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 
00:36:05.992 [2024-10-01 01:53:45.496165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.496327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.496470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.496637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.496765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.496934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.496959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.497127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.497320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.497509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.497669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 
00:36:05.992 [2024-10-01 01:53:45.497796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.497964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.497990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.498167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.498193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.498326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.498352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.498516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.498541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.498678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.498704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.498838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.498867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 
00:36:05.992 [2024-10-01 01:53:45.499541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.499862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.499974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.500004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.500107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.500133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.500265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.500290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.500427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.500453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.500614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.992 [2024-10-01 01:53:45.500639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.992 qpair failed and we were unable to recover it. 00:36:05.992 [2024-10-01 01:53:45.500800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.500825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.500980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 
00:36:05.993 [2024-10-01 01:53:45.501151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.501312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.501475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.501662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.501801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.501826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.501991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.502143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.502302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.502467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.502630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 
00:36:05.993 [2024-10-01 01:53:45.502777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.502804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.502969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.503015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.503184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.503211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.503392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.503420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.503625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.503652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.503793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.503822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.503983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.504173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.504367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.504553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 
00:36:05.993 [2024-10-01 01:53:45.504747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.504933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.504957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.505106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.505132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.505295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.505320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.505485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.505510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.505675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.505699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.505860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.505888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.506047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.506073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.506182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.506207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 00:36:05.993 [2024-10-01 01:53:45.506350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.993 [2024-10-01 01:53:45.506375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.993 qpair failed and we were unable to recover it. 
00:36:05.993 [2024-10-01 01:53:45.506513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.506538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.506700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.506726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.506892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.506917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.507966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.507990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 
00:36:05.994 [2024-10-01 01:53:45.508113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.508139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.508324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.508351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.508542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.508570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.508698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.508723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.508880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.508908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 
00:36:05.994 [2024-10-01 01:53:45.509778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.509932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.509957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.510944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.510973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.511211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.511255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.511457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.511512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 
00:36:05.994 [2024-10-01 01:53:45.511685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.511738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.511878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.511923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.512093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.512121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.512255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.512282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.512443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.512488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.512675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.512718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.512869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.512896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.994 [2024-10-01 01:53:45.513053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.994 [2024-10-01 01:53:45.513084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.994 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.513266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.513309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.513470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.513502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 
00:36:05.995 [2024-10-01 01:53:45.513666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.513696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.513873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.513903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.514968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.514994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.515117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.515144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.515276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.515320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 
00:36:05.995 [2024-10-01 01:53:45.515505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.515550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.515742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.515771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.515919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.515945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.516086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.516113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.516246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.516289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.516443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.516486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.516671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.516710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.516865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.516892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.517076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.517105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 00:36:05.995 [2024-10-01 01:53:45.517260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.995 [2024-10-01 01:53:45.517289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:05.995 qpair failed and we were unable to recover it. 
00:36:05.995 [2024-10-01 01:53:45.517415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.995 [2024-10-01 01:53:45.517444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:05.995 qpair failed and we were unable to recover it.
00:36:05.996 [2024-10-01 01:53:45.520897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.996 [2024-10-01 01:53:45.520941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:05.996 qpair failed and we were unable to recover it.
00:36:05.996 [2024-10-01 01:53:45.521111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.996 [2024-10-01 01:53:45.521151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:05.996 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against 10.0.0.2, port 4420 repeats continuously for tqpair=0x196b340, 0x7ff500000b90 and 0x7ff504000b90, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.001 [2024-10-01 01:53:45.556412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.556441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.556617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.556664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.556797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.556827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.556966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.557014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.557185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.557228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.557412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.557457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.557642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.557685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.557849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.557874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.558061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.558106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.558260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.558306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 
00:36:06.001 [2024-10-01 01:53:45.558425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.558452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.558652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.558694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.558838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.558864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.559044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.559280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.559459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.559659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.559828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.559991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.560182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 
00:36:06.001 [2024-10-01 01:53:45.560402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.560599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.560782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.560947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.560973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.001 qpair failed and we were unable to recover it. 00:36:06.001 [2024-10-01 01:53:45.561131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.001 [2024-10-01 01:53:45.561175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.561310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.561358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.561517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.561560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.561696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.561723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.561835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.561861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.562012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 
00:36:06.002 [2024-10-01 01:53:45.562154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.562365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.562542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.562744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.562944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.562970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.563170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.563217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.563373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.563402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.563607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.563636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.563813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.563839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.563975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 
00:36:06.002 [2024-10-01 01:53:45.564195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.564371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.564566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.564756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.564920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.564947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.565087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.565257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.565416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.565547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.565712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 
00:36:06.002 [2024-10-01 01:53:45.565852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.565877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.566040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.566237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.566455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.566617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.566810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.566977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.567024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.002 [2024-10-01 01:53:45.567194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.002 [2024-10-01 01:53:45.567236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.002 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.567428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.567475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.567578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.567604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 
00:36:06.003 [2024-10-01 01:53:45.567739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.567765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.567902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.567928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.568100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.568145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.568339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.568383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.568541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.568584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.568728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.568754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.568891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.568917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.569077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.569123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.569309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.569352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.569521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.569565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 
00:36:06.003 [2024-10-01 01:53:45.569699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.569726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.569860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.569887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.570009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.570036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.570205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.570232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.570428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.570472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.570675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.570701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.570839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.570865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.571038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.571066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.571226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.571268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.571434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.571482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 
00:36:06.003 [2024-10-01 01:53:45.571587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.571614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.571774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.571801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.571968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.572011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.572198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.572242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.572400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.572429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.572611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.572654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.572824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.572851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.573033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.573202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.573433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 
00:36:06.003 [2024-10-01 01:53:45.573608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.573752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.573909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.573934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.574099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.574125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.003 [2024-10-01 01:53:45.574264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.003 [2024-10-01 01:53:45.574289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.003 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.574491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.574517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.574660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.574687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.574797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.574824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.574944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.574970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.575165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.575209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 
00:36:06.004 [2024-10-01 01:53:45.575399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.575428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.575612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.575638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.575780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.575806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.575949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.575975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.576113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.576157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.576313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.576358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.576514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.576558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.576697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.576723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.576841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.576867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.577066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.577110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 
00:36:06.004 [2024-10-01 01:53:45.577282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.577326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.577450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.577481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.577660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.577688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.577806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.577833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.577984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.578154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.578327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.578506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.578690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.578898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.578924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 
00:36:06.004 [2024-10-01 01:53:45.579085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.579112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.579267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.579311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.579473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.579516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.579681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.579724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.579860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.579886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.580057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.580100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.580261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.580292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.580441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.580470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.580648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.580675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 00:36:06.004 [2024-10-01 01:53:45.580814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.004 [2024-10-01 01:53:45.580839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.004 qpair failed and we were unable to recover it. 
00:36:06.004 [2024-10-01 01:53:45.580985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.004 [2024-10-01 01:53:45.581025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:06.004 qpair failed and we were unable to recover it.
00:36:06.005 [2024-10-01 01:53:45.582073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.005 [2024-10-01 01:53:45.582113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:06.005 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111, then the nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats continuously from 01:53:45.581 through 01:53:45.619, alternating between tqpair=0x196b340 and tqpair=0x7ff500000b90, always against addr=10.0.0.2, port=4420 ...]
00:36:06.010 [2024-10-01 01:53:45.619201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.010 [2024-10-01 01:53:45.619228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:06.010 qpair failed and we were unable to recover it.
00:36:06.010 [2024-10-01 01:53:45.619410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.619439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.619571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.619599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.619739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.619765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.619931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.619960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.620158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.620186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.620370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.620399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.620545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.620574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.620706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.620734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.620876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.620902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.621096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.621140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 
00:36:06.010 [2024-10-01 01:53:45.621298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.621325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.621440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.621465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.621605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.010 [2024-10-01 01:53:45.621633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.010 qpair failed and we were unable to recover it. 00:36:06.010 [2024-10-01 01:53:45.621792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.621817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.621958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.621983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.622186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.622214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.622358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.622385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.622578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.622607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.622786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.622814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.622953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.622981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 
00:36:06.011 [2024-10-01 01:53:45.623165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.623210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.623368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.623397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.623535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.623562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.623737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.623779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.623904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.623934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.624088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.624120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.624261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.624287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.624455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.624499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.624631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.624658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.624815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.624845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 
00:36:06.011 [2024-10-01 01:53:45.625002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.625160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.625306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.625473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.625642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.625859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.625889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.626011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.626171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.626341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.626514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 
00:36:06.011 [2024-10-01 01:53:45.626734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.626921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.626951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.627113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.627143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.627280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.627307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.627426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.627453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.627596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.011 [2024-10-01 01:53:45.627622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.011 qpair failed and we were unable to recover it. 00:36:06.011 [2024-10-01 01:53:45.627729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.627755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.627910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.627955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.628142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.628172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.628341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.628368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 
00:36:06.012 [2024-10-01 01:53:45.628542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.628572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.628752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.628781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.628948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.628975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.629146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.629176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.629360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.629389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.629550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.629577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.629694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.629721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.629836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.629863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.630006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.630218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 
00:36:06.012 [2024-10-01 01:53:45.630407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.630624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.630764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.630960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.630989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.631138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.631165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.631273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.631307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.631460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.631487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.631621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.631648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.631832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.631861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.632024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 
00:36:06.012 [2024-10-01 01:53:45.632209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.632349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.632510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.632692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.632858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.632884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.633054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.633264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.633402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.633581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.633750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 
00:36:06.012 [2024-10-01 01:53:45.633929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.633959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.634119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.634149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.634342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.012 [2024-10-01 01:53:45.634368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.012 qpair failed and we were unable to recover it. 00:36:06.012 [2024-10-01 01:53:45.634480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.634524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.634646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.634676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.634834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.634860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.635004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.635219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.635405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.635575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 
00:36:06.013 [2024-10-01 01:53:45.635771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.635957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.635983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.636155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.636341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.636530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.636695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.636875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.636990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.637049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.637167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.637195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.637384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.637413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 
00:36:06.013 [2024-10-01 01:53:45.637571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.637597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.637766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.637811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.638858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.638886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.639046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.639074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.639237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.639264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 
00:36:06.013 [2024-10-01 01:53:45.639477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.639527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.639671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.639701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.639858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.639885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.640109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.640139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.640317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.640347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.640503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.640529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.640665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.640709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.640861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.640891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.641036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.641064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.013 [2024-10-01 01:53:45.641182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.641209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 
00:36:06.013 [2024-10-01 01:53:45.641450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.013 [2024-10-01 01:53:45.641479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.013 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.641634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.641661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.641778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.641805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.641945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.641972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.642136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.642163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.642303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.642329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.642466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.642494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.642699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.642726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.642896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.642940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.643105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.643132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 
00:36:06.014 [2024-10-01 01:53:45.643277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.643311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.643502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.643531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.643690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.643721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.643855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.643881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.644047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.644075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.644225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.644252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.644417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.644444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.644623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.644653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.644879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.644909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.645071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.645098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 
00:36:06.014 [2024-10-01 01:53:45.645258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.645288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.645464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.645494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.645649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.645676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.645822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.645866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.646917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.646944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 
00:36:06.014 [2024-10-01 01:53:45.647074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.647104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.647229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.647256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.647396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.647422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.647582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.647611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.647792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.647818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.647974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.648009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.648161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.648190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.014 [2024-10-01 01:53:45.648358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.014 [2024-10-01 01:53:45.648384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.014 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.648555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.648583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.648773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.648802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 
00:36:06.015 [2024-10-01 01:53:45.648952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.648982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.649151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.649177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.649370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.649400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.649583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.649610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.649725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.649753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.649867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.649893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.650030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.650245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.650424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.650608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 
00:36:06.015 [2024-10-01 01:53:45.650750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.650915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.650945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.651136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.651163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.651345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.651374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.651495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.651525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.651686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.651712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.651830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.651856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.652017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.652242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.652387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 
00:36:06.015 [2024-10-01 01:53:45.652545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.652707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.652886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.652915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.653091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.653251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.653421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.653629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.653826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.653971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.654151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 
00:36:06.015 [2024-10-01 01:53:45.654312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.654465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.654660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.654843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.654872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.655049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.015 [2024-10-01 01:53:45.655076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.015 qpair failed and we were unable to recover it. 00:36:06.015 [2024-10-01 01:53:45.655240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.655267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.655404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.655430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.655541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.655568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.655730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.655756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.655930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.655956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 
00:36:06.016 [2024-10-01 01:53:45.656081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.656234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.656398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.656578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.656768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.656933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.656959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.657140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.657168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.657349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.657375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.657553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.657580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.657756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.657786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 
00:36:06.016 [2024-10-01 01:53:45.657968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.658165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.658422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.658624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.658810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.658953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.658979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.659150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.659180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.659311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.659337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.659503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.659529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.659659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.659688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 
00:36:06.016 [2024-10-01 01:53:45.659848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.659875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.660061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.660091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.660248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.660278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.660438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.660464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.660596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.660640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.660816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.660850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.661032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.661059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.661182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.016 [2024-10-01 01:53:45.661208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.016 qpair failed and we were unable to recover it. 00:36:06.016 [2024-10-01 01:53:45.661395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.661425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.661575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.661601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 
00:36:06.017 [2024-10-01 01:53:45.661741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.661767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.661909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.661943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.662957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.662986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.663124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.663151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.663383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.663412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 
00:36:06.017 [2024-10-01 01:53:45.663588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.663617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.663767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.663794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.663936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.663963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.664158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.664185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.664402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.664428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.664595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.664625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.664775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.664804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.665026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.665053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.665242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.665271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.665420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.665450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 
00:36:06.017 [2024-10-01 01:53:45.665607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.665634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.665812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.665842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.666945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.666975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.667144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.667169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.667314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.667342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 
00:36:06.017 [2024-10-01 01:53:45.667458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.667501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.667612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.667640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.667853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.667882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.668055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.668082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.668223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.668249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.017 qpair failed and we were unable to recover it. 00:36:06.017 [2024-10-01 01:53:45.668403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.017 [2024-10-01 01:53:45.668428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.668543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.668569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.668743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.668769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.668903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.668928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.669096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.669127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 
00:36:06.018 [2024-10-01 01:53:45.669318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.669344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.669509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.669534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.669646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.669688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.669831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.669859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.670921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.670970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 
00:36:06.018 [2024-10-01 01:53:45.671143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.671172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.671326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.671354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.671484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.671526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.671661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.671690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.671831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.671858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.672007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.672146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.672303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.672473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.672628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 
00:36:06.018 [2024-10-01 01:53:45.672792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.672817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.673884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.673909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.674161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.674201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.674370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.674398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 00:36:06.018 [2024-10-01 01:53:45.674524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.018 [2024-10-01 01:53:45.674551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.018 qpair failed and we were unable to recover it. 
00:36:06.018 [2024-10-01 01:53:45.674687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.018 [2024-10-01 01:53:45.674714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:06.018 qpair failed and we were unable to recover it.
00:36:06.019 [2024-10-01 01:53:45.680125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.019 [2024-10-01 01:53:45.680164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.019 qpair failed and we were unable to recover it.
00:36:06.019 [2024-10-01 01:53:45.680297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.019 [2024-10-01 01:53:45.680352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:06.019 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every remaining reconnect attempt between 01:53:45.674 and 01:53:45.713 (console time 00:36:06.018 to 00:36:06.024): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair 0x7ff500000b90, 0x7ff50c000b90, or 0x196b340, always with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.024 [2024-10-01 01:53:45.712834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.712859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.713929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.713955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.714102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.714128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.714317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.714344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 
00:36:06.024 [2024-10-01 01:53:45.714483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.714509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.714648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.714673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.024 [2024-10-01 01:53:45.714811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.024 [2024-10-01 01:53:45.714836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.024 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.714972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.715146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.715292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.715473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.715660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.715850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.715878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.716032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 
00:36:06.025 [2024-10-01 01:53:45.716172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.716363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.716550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.716759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.716934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.716961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.717105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.717272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.717442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.717642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.717791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 
00:36:06.025 [2024-10-01 01:53:45.717954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.717981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.718178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.718202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.718347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.718376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.718554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.718582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.718737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.718761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.718945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.718972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.719132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.719171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.719324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.719353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.719496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.719522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.719709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.719738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 
00:36:06.025 [2024-10-01 01:53:45.719870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.719896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.720065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.720092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.720206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.720233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.720346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.720373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.720552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.720582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.025 [2024-10-01 01:53:45.720699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.025 [2024-10-01 01:53:45.720728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.025 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.720878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.720905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.721043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.721211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.721390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 
00:36:06.026 [2024-10-01 01:53:45.721570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.721747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.721964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.721991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.722137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.722163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.722322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.722351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.722505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.722531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.722695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.722724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.722875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.722904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.723060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.723227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 
00:36:06.026 [2024-10-01 01:53:45.723363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.723519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.723731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.723907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.723935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.724883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.724909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 
00:36:06.026 [2024-10-01 01:53:45.725017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.725184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.725378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.725543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.725705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.725911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.725954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.726108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.726137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.726283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.726308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.726434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.726462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.726650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.726676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 
00:36:06.026 [2024-10-01 01:53:45.726784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.726809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.726971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.727007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.727124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.026 [2024-10-01 01:53:45.727150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.026 qpair failed and we were unable to recover it. 00:36:06.026 [2024-10-01 01:53:45.727291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.727316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.727457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.727485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.727612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.727637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.727774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.727799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.727950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.727979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.728114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.728139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.728257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.728282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 
00:36:06.027 [2024-10-01 01:53:45.728443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.728468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.728617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.728642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.728779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.728808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.729060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.729234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.729453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.729630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.729813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.729980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.730202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 
00:36:06.027 [2024-10-01 01:53:45.730361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.730494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.730713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.730895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.730921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.731957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.731983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 
00:36:06.027 [2024-10-01 01:53:45.732100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.732127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.732264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.732309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.732477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.732503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.732673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.732702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.732817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.732846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.732994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.733166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.733333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.733496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 00:36:06.027 [2024-10-01 01:53:45.733716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.027 qpair failed and we were unable to recover it. 
00:36:06.027 [2024-10-01 01:53:45.733891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.027 [2024-10-01 01:53:45.733920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.734767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.734802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.735825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.735851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.736007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.736033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 00:36:06.028 [2024-10-01 01:53:45.736148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.028 [2024-10-01 01:53:45.736174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.028 qpair failed and we were unable to recover it. 
00:36:06.028 [2024-10-01 01:53:45.736310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:06.028 [2024-10-01 01:53:45.736336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 
00:36:06.028 qpair failed and we were unable to recover it. 
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 01:53:45.736452 through 01:53:45.756679, elapsed 00:36:06.028 to 00:36:06.031, for tqpair handle 0x7ff50c000b90 and, from 01:53:45.741150 onward, also for tqpair handle 0x196b340, every attempt targeting addr=10.0.0.2, port=4420 ...]
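errno 111 on Linux is ECONNREFUSED: the host's TCP connect() to 10.0.0.2 on port 4420 (the default NVMe over Fabrics port) is being actively refused, which typically means no NVMe/TCP listener is up on the target at that moment, so nvme_tcp_qpair_connect_sock cannot establish the qpair's socket. The following is a minimal standalone sketch that reproduces the same classification; it is not SPDK code, and only the address and port are taken from the log above.

/* Minimal standalone probe (not SPDK code): attempt the same TCP connection the
 * test is attempting and print the resulting errno.  Against a target with no
 * listener on 10.0.0.2:4420 this reports errno = 111 (Connection refused),
 * matching the posix_sock_create errors in the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in target = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &target.sin_addr);   /* address and port taken from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&target, sizeof(target)) != 0) {
        /* Prints the same "connect() failed, errno = 111" seen above when the peer refuses. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected: something is listening on 10.0.0.2:4420\n");
    close(fd);
    return 0;
}

Compiled with a plain C compiler and run while the target has no listener on port 4420, the probe prints the same errno = 111 that posix_sock_create reports in the entries above.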
00:36:06.031 [2024-10-01 01:53:45.756839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:06.031 [2024-10-01 01:53:45.756865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 
00:36:06.031 qpair failed and we were unable to recover it. 
[... from 01:53:45.757348 onward a third qpair handle, 0x7ff504000b90, joins the same cycle; the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." pattern keeps repeating for tqpair handles 0x7ff504000b90 and 0x7ff50c000b90, still against addr=10.0.0.2, port=4420, up to the final attempt logged below ...]
00:36:06.033 [2024-10-01 01:53:45.773070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:06.033 [2024-10-01 01:53:45.773096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 
00:36:06.033 qpair failed and we were unable to recover it. 
00:36:06.033 [2024-10-01 01:53:45.773226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.033 [2024-10-01 01:53:45.773253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.033 qpair failed and we were unable to recover it. 00:36:06.033 [2024-10-01 01:53:45.773410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.033 [2024-10-01 01:53:45.773440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.033 qpair failed and we were unable to recover it. 00:36:06.033 [2024-10-01 01:53:45.773639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.033 [2024-10-01 01:53:45.773668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.033 qpair failed and we were unable to recover it. 00:36:06.033 [2024-10-01 01:53:45.773918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.033 [2024-10-01 01:53:45.773947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.033 qpair failed and we were unable to recover it. 00:36:06.033 [2024-10-01 01:53:45.774117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.033 [2024-10-01 01:53:45.774143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.033 qpair failed and we were unable to recover it. 00:36:06.033 [2024-10-01 01:53:45.774324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.774353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.774517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.774545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.774740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.774770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.774902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.774928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.775060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 
00:36:06.034 [2024-10-01 01:53:45.775225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.775393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.775571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.775722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.775886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.775912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.776053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.776080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.776215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.776240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.776395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.776438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.776650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.776679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.776805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.776834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 
00:36:06.034 [2024-10-01 01:53:45.777006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.777034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.777197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.777224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.777372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.777400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.777576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.777604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.777814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.777843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.777993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.778181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.778324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.778457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.778613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 
00:36:06.034 [2024-10-01 01:53:45.778809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.778837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.778979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.779152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.779285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.779452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.779667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.779842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.779870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.780047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.780074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.780222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.780248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.034 [2024-10-01 01:53:45.780383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.780410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 
00:36:06.034 [2024-10-01 01:53:45.780576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.034 [2024-10-01 01:53:45.780620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.034 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.780733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.780762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.780926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.780952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.781954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.781979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.782155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.782185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 
00:36:06.035 [2024-10-01 01:53:45.782331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.782358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.782499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.782525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.782636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.782662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.782820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.782849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.783867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.783894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 
00:36:06.035 [2024-10-01 01:53:45.784064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.784211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.784370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.784536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.784674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.784901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.784927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.785091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.785118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.785232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.785258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.035 [2024-10-01 01:53:45.785394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.035 [2024-10-01 01:53:45.785420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.035 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.785581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.785607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 
00:36:06.036 [2024-10-01 01:53:45.785792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.785821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.786902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.786929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.787074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.787270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.787432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 
00:36:06.036 [2024-10-01 01:53:45.787571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.787731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.787921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.787947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.788128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.788290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.788501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.788689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.788855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.788993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.789224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 
00:36:06.036 [2024-10-01 01:53:45.789382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.789527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.789708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.789892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.789920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.790138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.790297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.790477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.790684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.790829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.790988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 
00:36:06.036 [2024-10-01 01:53:45.791161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.791394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.791559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.791741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.791920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.791950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.036 [2024-10-01 01:53:45.792125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.036 [2024-10-01 01:53:45.792152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.036 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.792260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.792286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.792481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.792506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.792635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.792661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.792803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.792828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 
00:36:06.037 [2024-10-01 01:53:45.793012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.793877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.793991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.794155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.794321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.794534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 
00:36:06.037 [2024-10-01 01:53:45.794718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.794856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.794882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.795854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.795896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.796097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.796124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 00:36:06.037 [2024-10-01 01:53:45.796262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.037 [2024-10-01 01:53:45.796289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.037 qpair failed and we were unable to recover it. 
00:36:06.037 [2024-10-01 01:53:45.796406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.037 [2024-10-01 01:53:45.796432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.037 qpair failed and we were unable to recover it.
[... the same connect() / qpair-connect error triplet repeats for every reconnect attempt from 01:53:45.796 through 01:53:45.808, always for tqpair=0x7ff50c000b90, addr=10.0.0.2, port=4420 ...]
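Editor's note: errno = 111 is ECONNREFUSED, i.e. the host side keeps retrying the NVMe/TCP connect while nothing is listening on 10.0.0.2 port 4420. A minimal bash sketch of probing that listener from the host, not part of the test suite; only the address and port are taken from the log above, and any connection failure is treated as "not listening yet":

#!/usr/bin/env bash
# Probe the NVMe/TCP listener the initiator is trying to reach. While the
# target is down, every attempt fails the same way the log shows
# (connect() -> errno 111 / ECONNREFUSED).
for attempt in $(seq 1 5); do
    # Open and immediately close a TCP connection via bash's /dev/tcp.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "attempt ${attempt}: port 4420 is accepting connections"
        break
    fi
    echo "attempt ${attempt}: connection failed (ECONNREFUSED while the target is down), retrying"
    sleep 1
done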
00:36:06.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1063993 Killed "${NVMF_APP[@]}" "$@"
00:36:06.039 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:06.039 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:06.039 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:06.039 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:06.039 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() / qpair-connect error triplet keeps repeating between these trace lines (01:53:45.808 - 01:53:45.813) ...]
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=1064550
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 1064550
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1064550 ']'
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:06.040 01:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() / qpair-connect error triplet keeps repeating between these trace lines (01:53:45.813 - 01:53:45.816) ...]
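Editor's note: the trace above shows the harness relaunching the target (nvmfappstart -m 0xF0, nvmfpid=1064550) and then waiting for its RPC socket (waitforlisten). A minimal bash sketch of that restart-and-wait pattern; the binary path, netns name, flags, and /var/tmp/spdk.sock are copied from the log, but this is not the project's helper implementation and it assumes root privileges for ip netns exec:

#!/usr/bin/env bash
# Relaunch nvmf_tgt inside the test network namespace and poll for its RPC
# socket before issuing any further commands.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!   # pid of the backgrounded command (the netns wrapper)

max_retries=100
until [ -S /var/tmp/spdk.sock ]; do
    # Bail out if the app died or we ran out of retries.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    (( max_retries-- > 0 )) || { echo "timed out waiting for /var/tmp/spdk.sock" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid ${nvmfpid}) is up and listening on /var/tmp/spdk.sock"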
[... the connect() / qpair-connect error triplet continues to repeat unchanged from 01:53:45.816 through 01:53:45.831 (tqpair=0x7ff50c000b90, addr=10.0.0.2, port=4420) after the new nvmf_tgt (pid 1064550) has been launched ...]
00:36:06.330 [2024-10-01 01:53:45.832076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.832106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.832258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.832285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.832445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.832488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.832612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.832642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.832794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.832824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.832973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.833144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.833305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.833459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.833618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 
00:36:06.330 [2024-10-01 01:53:45.833779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.833926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.833954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.834861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.834989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.835163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 
00:36:06.330 [2024-10-01 01:53:45.835320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.835476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.835643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.835903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.330 [2024-10-01 01:53:45.835930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.330 qpair failed and we were unable to recover it. 00:36:06.330 [2024-10-01 01:53:45.836106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.836135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.836313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.836339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.836476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.836520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.836672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.836701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.836853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.836882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.837046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 
00:36:06.331 [2024-10-01 01:53:45.837216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.837379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.837528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.837695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.837938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.837967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.838115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.838144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.838302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.838331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.838490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.838515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.838636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.838663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.838805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.838831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 
00:36:06.331 [2024-10-01 01:53:45.839008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.839177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.839311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.839480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.839660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.839900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.839929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.840072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.840208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.840351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.840482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 
00:36:06.331 [2024-10-01 01:53:45.840646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.840861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.840890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.841133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.841290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.841470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.841622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.841802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.841967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.842004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.842141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.842167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 00:36:06.331 [2024-10-01 01:53:45.842283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.331 [2024-10-01 01:53:45.842309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.331 qpair failed and we were unable to recover it. 
00:36:06.331 [2024-10-01 01:53:45.842430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.842459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.842584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.842609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.842718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.842744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.842903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.842932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.843897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.843927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 
00:36:06.332 [2024-10-01 01:53:45.844070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.844098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.844287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.844316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.844479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.844512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.844672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.844697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.844867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.844896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.845067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.845231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.845381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.845542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.845726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 
00:36:06.332 [2024-10-01 01:53:45.845917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.845943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.846874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.846900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.847073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.847100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.847218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.847244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.847395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.847421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 
00:36:06.332 [2024-10-01 01:53:45.847588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.847617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.332 qpair failed and we were unable to recover it. 00:36:06.332 [2024-10-01 01:53:45.847839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.332 [2024-10-01 01:53:45.847868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.848884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.848912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.849083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.849109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.849243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.849269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 
00:36:06.333 [2024-10-01 01:53:45.849373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.849399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.849538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.849564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.849743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.849772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.849993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.850154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.850314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.850525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.850690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.850845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.850873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.851024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 
00:36:06.333 [2024-10-01 01:53:45.851197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.851383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.851537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.851734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.851932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.851960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.852104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.852279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.852464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.852633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.852819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 
00:36:06.333 [2024-10-01 01:53:45.852951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.852977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.853140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.853167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.853272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.853298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.853472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.853498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.853647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.853674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.853876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.853902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.854051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.854079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.854247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.854274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.333 qpair failed and we were unable to recover it. 00:36:06.333 [2024-10-01 01:53:45.854385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.333 [2024-10-01 01:53:45.854411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.334 qpair failed and we were unable to recover it. 00:36:06.334 [2024-10-01 01:53:45.854552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.334 [2024-10-01 01:53:45.854578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.334 qpair failed and we were unable to recover it. 
00:36:06.334 [2024-10-01 01:53:45.854675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.334 [2024-10-01 01:53:45.854701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.334 qpair failed and we were unable to recover it.
00:36:06.334 [... the same connect() failed (errno = 111) / sock connection error sequence for tqpair=0x7ff50c000b90 (addr=10.0.0.2, port=4420) repeats continuously from 01:53:45.854834 through 01:53:45.860921; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.335 [2024-10-01 01:53:45.861055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.335 [2024-10-01 01:53:45.861049] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:36:06.335 [2024-10-01 01:53:45.861083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.335 qpair failed and we were unable to recover it.
00:36:06.335 [2024-10-01 01:53:45.861126] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:06.335 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff50c000b90 (addr=10.0.0.2, port=4420) continue from 01:53:45.861188 through 01:53:45.862958 ...]
00:36:06.335 [2024-10-01 01:53:45.863013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1979260 (9): Bad file descriptor
00:36:06.335 [2024-10-01 01:53:45.863180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.335 [2024-10-01 01:53:45.863219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:06.335 qpair failed and we were unable to recover it.
00:36:06.335 [... the same failure sequence for tqpair=0x196b340 repeats from 01:53:45.863358 through 01:53:45.863819 ...]
00:36:06.335 [2024-10-01 01:53:45.863947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.335 [2024-10-01 01:53:45.863974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420
00:36:06.335 qpair failed and we were unable to recover it.
00:36:06.335 [... identical connect() failed (errno = 111) / sock connection error entries, alternating between tqpair=0x7ff50c000b90 and tqpair=0x196b340 (both addr=10.0.0.2, port=4420), continue from 01:53:45.864109 through 01:53:45.873388; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.337 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff50c000b90 and tqpair=0x196b340 continue from 01:53:45.873494 through 01:53:45.874139 ...]
00:36:06.337 [2024-10-01 01:53:45.874268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.337 [2024-10-01 01:53:45.874309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:06.337 qpair failed and we were unable to recover it.
00:36:06.337 [... the same failure sequence for tqpair=0x7ff504000b90 repeats from 01:53:45.874432 through 01:53:45.874869 ...]
00:36:06.337 [2024-10-01 01:53:45.875035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.337 [2024-10-01 01:53:45.875063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.337 qpair failed and we were unable to recover it.
00:36:06.337 [... identical connect() failed (errno = 111) / sock connection error entries, cycling among tqpair=0x7ff50c000b90, tqpair=0x196b340 and tqpair=0x7ff504000b90 (all addr=10.0.0.2, port=4420), continue from 01:53:45.875212 through 01:53:45.887658; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:06.339 [2024-10-01 01:53:45.887775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.887800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.887967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.887993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.339 qpair failed and we were unable to recover it. 00:36:06.339 [2024-10-01 01:53:45.888873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.339 [2024-10-01 01:53:45.888898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 
00:36:06.340 [2024-10-01 01:53:45.889337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.889830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.889970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.890141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.890314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.890469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.890613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.890806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.890833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 
00:36:06.340 [2024-10-01 01:53:45.890981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.891880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.891992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.892129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.892291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.892454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 
00:36:06.340 [2024-10-01 01:53:45.892643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.892846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.892873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.892988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.893849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.893974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.894149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 
00:36:06.340 [2024-10-01 01:53:45.894345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.894475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.894634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.894795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.340 [2024-10-01 01:53:45.894936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.340 [2024-10-01 01:53:45.894964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.340 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.895109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.895135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.895239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.895265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.895412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.895439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.895572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.895598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.895770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.895808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 
00:36:06.341 [2024-10-01 01:53:45.896033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.896189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.896349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.896515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.896674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.896858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.896887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.897038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.897231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.897411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.897574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 
00:36:06.341 [2024-10-01 01:53:45.897718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.897863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.897889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.898924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.898951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.899087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 
00:36:06.341 [2024-10-01 01:53:45.899288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.899491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.899644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.899785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.899959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.899985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.900109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.900136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.900247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.900274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.341 [2024-10-01 01:53:45.900413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.341 [2024-10-01 01:53:45.900439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.341 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.900558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.900585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.900722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.900748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 
00:36:06.342 [2024-10-01 01:53:45.900859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.900885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.901865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.901891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.902029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.902196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.902372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 
00:36:06.342 [2024-10-01 01:53:45.902539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.902709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.902878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.902905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.903910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.903939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.904096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 
00:36:06.342 [2024-10-01 01:53:45.904261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.904394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.904563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.904732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.904895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.904921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.905075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.905210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.905407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.905601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.905767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 
00:36:06.342 [2024-10-01 01:53:45.905915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.905954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.906121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.906161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.906335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.906364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.906503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.906531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.342 [2024-10-01 01:53:45.906645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.342 [2024-10-01 01:53:45.906673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.342 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.906816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.906843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.906981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.907160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.907302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.907463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 
00:36:06.343 [2024-10-01 01:53:45.907595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.907730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.907870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.907896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.908958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.908984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 
00:36:06.343 [2024-10-01 01:53:45.909132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.909160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.909284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.909332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.909449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.909476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.909620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.909647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.909785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.909811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.909962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.910166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.910326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.910498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.910630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 
00:36:06.343 [2024-10-01 01:53:45.910798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.910943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.910969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.911953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.911980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.912126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.912153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.912295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.912321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 
00:36:06.343 [2024-10-01 01:53:45.912458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.912485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.343 [2024-10-01 01:53:45.912621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.343 [2024-10-01 01:53:45.912648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.343 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.912754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.912780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.912889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.912915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.913814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.913840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 
00:36:06.344 [2024-10-01 01:53:45.914008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.914924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.914950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.915107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.915282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.915467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 
00:36:06.344 [2024-10-01 01:53:45.915656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.915786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.915952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.915978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.916909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.916936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.917056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.917083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 
00:36:06.344 [2024-10-01 01:53:45.917238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.917264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.917445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.917471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.917650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.917676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.917816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.917842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.917980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.918011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.918150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.918176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.918308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.918335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.918484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.918510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.918647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.344 [2024-10-01 01:53:45.918673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.344 qpair failed and we were unable to recover it. 00:36:06.344 [2024-10-01 01:53:45.918842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.918871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 
00:36:06.345 [2024-10-01 01:53:45.919023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.919936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.919962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.920118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.920276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.920415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 
00:36:06.345 [2024-10-01 01:53:45.920607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.920750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.920902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.920928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.921794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.921969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 
00:36:06.345 [2024-10-01 01:53:45.922117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.922292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.922463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.922628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.922769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.922902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.922929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.923067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.923229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.923368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.923557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 
00:36:06.345 [2024-10-01 01:53:45.923743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.923907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.923933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.924043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.924070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.924197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.924224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.924358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.924384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.924492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.345 [2024-10-01 01:53:45.924518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.345 qpair failed and we were unable to recover it. 00:36:06.345 [2024-10-01 01:53:45.924652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.924679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.924815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.924841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 
00:36:06.346 [2024-10-01 01:53:45.925336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.925936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.925964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.926130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.926293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.926430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.926574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.926742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 
00:36:06.346 [2024-10-01 01:53:45.926876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.926903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.927888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.927994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.928132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.928296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 
00:36:06.346 [2024-10-01 01:53:45.928466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.928593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.928740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.928904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.928930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.929048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.929076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.929184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.929211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.929353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.929379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.929513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.346 [2024-10-01 01:53:45.929539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.346 qpair failed and we were unable to recover it. 00:36:06.346 [2024-10-01 01:53:45.929678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.929704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.929813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.929839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 
00:36:06.347 [2024-10-01 01:53:45.929984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.930201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.930367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.930512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.930697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.930864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.930891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 
00:36:06.347 [2024-10-01 01:53:45.931644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.931939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.931966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.347 [2024-10-01 01:53:45.932431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.932917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.932944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 
00:36:06.347 [2024-10-01 01:53:45.933070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.933959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.933985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 
00:36:06.347 [2024-10-01 01:53:45.934542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.934956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.934984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.935102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.935129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.347 [2024-10-01 01:53:45.935243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.347 [2024-10-01 01:53:45.935271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.347 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.935435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.935461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.935579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.935606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.935744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.935771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.935910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.935937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 
00:36:06.348 [2024-10-01 01:53:45.936045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.936184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.936348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.936513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.936682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.936811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.936843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 
00:36:06.348 [2024-10-01 01:53:45.937610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.937912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.937938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.938890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.938917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.939043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 
00:36:06.348 [2024-10-01 01:53:45.939206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.939350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.939578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.939769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.939949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.939975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.940147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.940312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.940449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.940591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.940730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 
00:36:06.348 [2024-10-01 01:53:45.940882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.940923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.941072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.941102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.941249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.348 [2024-10-01 01:53:45.941277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.348 qpair failed and we were unable to recover it. 00:36:06.348 [2024-10-01 01:53:45.941384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.941411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.941583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.941611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.941728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.941754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.941891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.941917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 
00:36:06.349 [2024-10-01 01:53:45.942486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.942878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.942906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.943877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.943904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 
00:36:06.349 [2024-10-01 01:53:45.944049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.944840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.944976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.945130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.945320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.945469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 
00:36:06.349 [2024-10-01 01:53:45.945627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.945791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.945953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.945979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.946954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.946987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 00:36:06.349 [2024-10-01 01:53:45.947131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.349 [2024-10-01 01:53:45.947157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.349 qpair failed and we were unable to recover it. 
00:36:06.349 [2024-10-01 01:53:45.947301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.947328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.947466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.947493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.947605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.947632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.947768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.947794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.947901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.947928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.948080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.948274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.948444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.948590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.948753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 
00:36:06.350 [2024-10-01 01:53:45.948890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.948917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.949834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.949861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 
00:36:06.350 [2024-10-01 01:53:45.950490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.950927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.950953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.951102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.951298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.951459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.951651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.951841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.951985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 
00:36:06.350 [2024-10-01 01:53:45.952199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.952329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.952490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.952627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.952759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.952923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.952949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.953122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.350 [2024-10-01 01:53:45.953151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.350 qpair failed and we were unable to recover it. 00:36:06.350 [2024-10-01 01:53:45.953259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.953285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.953421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.953447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.953596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.953622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 
00:36:06.351 [2024-10-01 01:53:45.953755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.953781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.953889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.953916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.954938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.954965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.955110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 
00:36:06.351 [2024-10-01 01:53:45.955260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.955418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.955609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.955745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.955908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.955934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.956071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.956241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.956379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.956541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.956671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 
00:36:06.351 [2024-10-01 01:53:45.956832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.956859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.957943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.957971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.958122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.958150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 00:36:06.351 [2024-10-01 01:53:45.958293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.351 [2024-10-01 01:53:45.958320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.351 qpair failed and we were unable to recover it. 
00:36:06.351 [2024-10-01 01:53:45.958455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.958481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.958637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.958664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.958804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.958831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.958966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.958993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.959134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.959161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.959294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.959320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.959487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.959514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.959634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.959673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.959814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.959841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.960007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 
00:36:06.352 [2024-10-01 01:53:45.960189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.960347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.960514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.960683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.960854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.960882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 
00:36:06.352 [2024-10-01 01:53:45.961740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.961905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.961932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.962829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.962995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.963137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 
00:36:06.352 [2024-10-01 01:53:45.963272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.963435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.963572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.963764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.963926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.963953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.964106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.964136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.964277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.352 [2024-10-01 01:53:45.964304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.352 qpair failed and we were unable to recover it. 00:36:06.352 [2024-10-01 01:53:45.964439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.964466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.964575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.964602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.964747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.964774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 
00:36:06.353 [2024-10-01 01:53:45.964914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.964941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.965893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.965920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 
00:36:06.353 [2024-10-01 01:53:45.966520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.966950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.966976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.967146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.967172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.967309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.967335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.967474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.967500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.967635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.967660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.967822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.967847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 
00:36:06.353 [2024-10-01 01:53:45.968172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.968946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.968972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.969145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.969279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.969413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.969593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 
00:36:06.353 [2024-10-01 01:53:45.969770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.969945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.969984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.970113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.970143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.970307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.970334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.353 qpair failed and we were unable to recover it. 00:36:06.353 [2024-10-01 01:53:45.970470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.353 [2024-10-01 01:53:45.970497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.970635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.970662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.970781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.970808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.970948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.970974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.971120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.971294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 
00:36:06.354 [2024-10-01 01:53:45.971481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.971649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.971787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.971953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.971980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.972818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.972844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 
00:36:06.354 [2024-10-01 01:53:45.972978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.973918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.973944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.974097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.974288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.974427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 
00:36:06.354 [2024-10-01 01:53:45.974627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.974784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.974917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.974945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.975935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.975961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.976103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.976130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 
00:36:06.354 [2024-10-01 01:53:45.976264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.976291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.354 qpair failed and we were unable to recover it. 00:36:06.354 [2024-10-01 01:53:45.976396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.354 [2024-10-01 01:53:45.976422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.976540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.976567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.976707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.976736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.976875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.976901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 
00:36:06.355 [2024-10-01 01:53:45.977832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.977858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.977976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.978951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.978978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.979131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.979267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 
00:36:06.355 [2024-10-01 01:53:45.979420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.979613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.979776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.979921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.979948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.980863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.980889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 
00:36:06.355 [2024-10-01 01:53:45.981015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.981044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.981180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.981207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.981353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.981380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.981522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.355 [2024-10-01 01:53:45.981548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.355 qpair failed and we were unable to recover it. 00:36:06.355 [2024-10-01 01:53:45.981656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.981683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.981794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.981821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.981962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.981988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.982130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.982275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.982436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 
00:36:06.356 [2024-10-01 01:53:45.982602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.982772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.982930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.982956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.983952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.983978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.984130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 
00:36:06.356 [2024-10-01 01:53:45.984266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.984457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.984643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.984805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.984940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.984967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.985079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.985237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.985401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.985562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.985725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 
00:36:06.356 [2024-10-01 01:53:45.985913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.985940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.986851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.986878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.987042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.987069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.987185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.987210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.987378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.987404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 
00:36:06.356 [2024-10-01 01:53:45.987564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.987591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.356 qpair failed and we were unable to recover it. 00:36:06.356 [2024-10-01 01:53:45.987747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.356 [2024-10-01 01:53:45.987773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.987886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.987914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.988919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.988946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 
00:36:06.357 [2024-10-01 01:53:45.989077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.989105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.989243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.989270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.989406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.989432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.989603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.989629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.989789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.989816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.989973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.990005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.990174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.990200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.990332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.990359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.990463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.990490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 00:36:06.357 [2024-10-01 01:53:45.990627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.357 [2024-10-01 01:53:45.990654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.357 qpair failed and we were unable to recover it. 
00:36:06.357 [2024-10-01 01:53:45.990822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.357 [2024-10-01 01:53:45.990849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420
00:36:06.357 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnection attempt between 01:53:45.990822 and 01:53:46.023964 (console timestamps 00:36:06.357 through 00:36:06.363). Only the log timestamps and the transient tqpair handle differ between repetitions: the handle alternates between 0x7ff500000b90 and 0x7ff50c000b90, with a single occurrence of tqpair=0x196b340 at 01:53:46.021188. ...]
00:36:06.363 [2024-10-01 01:53:46.023937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.363 [2024-10-01 01:53:46.023964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420
00:36:06.363 qpair failed and we were unable to recover it.
00:36:06.363 [2024-10-01 01:53:46.024085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.024956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.024993] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.363 [2024-10-01 01:53:46.025035] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.363 [2024-10-01 01:53:46.025050] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.363 [2024-10-01 01:53:46.025062] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.363 [2024-10-01 01:53:46.025072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.363 [2024-10-01 01:53:46.025070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-10-01 01:53:46.025235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.025405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:06.363 [2024-10-01 01:53:46.025433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.025378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:06.363 [2024-10-01 01:53:46.025405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:36:06.363 [2024-10-01 01:53:46.025408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:06.363 [2024-10-01 01:53:46.025542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.025709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.025850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.025876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.025990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.026145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.026294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.026437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 
00:36:06.363 [2024-10-01 01:53:46.026597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.026780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.026941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.026967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.027076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.027102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.363 [2024-10-01 01:53:46.027212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.363 [2024-10-01 01:53:46.027238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.363 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.027340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.027366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.027499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.027525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.027636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.027662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.027777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.027802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.027949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.027989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-10-01 01:53:46.028159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.028301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.028438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.028574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.028739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.028871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.028897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-10-01 01:53:46.029610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.029797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.029978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.030867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.030977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-10-01 01:53:46.031121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.031287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.031450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.031587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.031757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.031967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.031993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.032122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.032267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.032430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.032591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 
00:36:06.364 [2024-10-01 01:53:46.032742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.364 qpair failed and we were unable to recover it. 00:36:06.364 [2024-10-01 01:53:46.032948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.364 [2024-10-01 01:53:46.032975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.033866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.033893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-10-01 01:53:46.034320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.034909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.034935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.035081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.035225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.035388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.035529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.035722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-10-01 01:53:46.035881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.035907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.036865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.036976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.037152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.037285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 
00:36:06.365 [2024-10-01 01:53:46.037426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.037570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.037711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.037877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.365 [2024-10-01 01:53:46.037905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.365 qpair failed and we were unable to recover it. 00:36:06.365 [2024-10-01 01:53:46.038049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.038189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.038361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.038526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.038663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.038836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.038864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 
00:36:06.366 [2024-10-01 01:53:46.038976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.039875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.039981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.040129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.040325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 
00:36:06.366 [2024-10-01 01:53:46.040480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.040651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.040783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.040951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.040978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.041918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.041945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 
00:36:06.366 [2024-10-01 01:53:46.042108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.042854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.042989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.043154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.043308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.043453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 
00:36:06.366 [2024-10-01 01:53:46.043614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.366 qpair failed and we were unable to recover it. 00:36:06.366 [2024-10-01 01:53:46.043784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.366 [2024-10-01 01:53:46.043811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.043926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.043965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.044968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.044995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 
00:36:06.367 [2024-10-01 01:53:46.045142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.045288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.045424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.045566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.045761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.045950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.045977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.046107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.046258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.046404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.046547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 
00:36:06.367 [2024-10-01 01:53:46.046689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.046873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.046914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.047870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.047910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.048066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.048233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 
00:36:06.367 [2024-10-01 01:53:46.048411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.048549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.048680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.048866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.048893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.049042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.049070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.049174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.049201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.049314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.049341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.049461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.049488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.367 qpair failed and we were unable to recover it. 00:36:06.367 [2024-10-01 01:53:46.049652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.367 [2024-10-01 01:53:46.049679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.049790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.049818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 
00:36:06.368 [2024-10-01 01:53:46.049930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.049957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.050906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.050933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 
00:36:06.368 [2024-10-01 01:53:46.051529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.051946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.051972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.052937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.052965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 
00:36:06.368 [2024-10-01 01:53:46.053090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.053934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.053961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.054140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.054167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.054283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.054310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.054445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.054472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 
00:36:06.368 [2024-10-01 01:53:46.054616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.054642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.054751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.368 [2024-10-01 01:53:46.054778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.368 qpair failed and we were unable to recover it. 00:36:06.368 [2024-10-01 01:53:46.054887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.054923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.055921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.055947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-10-01 01:53:46.056066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.056955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.056991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.057144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.057338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.057494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-10-01 01:53:46.057628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.057774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.057949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.057977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.058880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.058985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 
00:36:06.369 [2024-10-01 01:53:46.059133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.369 qpair failed and we were unable to recover it. 00:36:06.369 [2024-10-01 01:53:46.059965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.369 [2024-10-01 01:53:46.059993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-10-01 01:53:46.060511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.060961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.060988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.061838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.061864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-10-01 01:53:46.062021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.062909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.062938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-10-01 01:53:46.063515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.063932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.063961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.064850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.064881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 
00:36:06.370 [2024-10-01 01:53:46.065002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.065032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.065140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.065168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.065281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.065309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.370 qpair failed and we were unable to recover it. 00:36:06.370 [2024-10-01 01:53:46.065491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.370 [2024-10-01 01:53:46.065518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.065627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.065654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.065789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.065817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.065923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.065950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.066060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.066264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.066427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-10-01 01:53:46.066556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.066721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.066863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.066902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.067867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.067897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-10-01 01:53:46.068065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.068875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.068902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.069041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.069069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.069172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.069198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 00:36:06.371 [2024-10-01 01:53:46.069312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.371 [2024-10-01 01:53:46.069340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.371 qpair failed and we were unable to recover it. 
00:36:06.371 [2024-10-01 01:53:46.069449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.371 [2024-10-01 01:53:46.069478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:06.371 qpair failed and we were unable to recover it.
00:36:06.371 [The same three-line failure sequence (posix_sock_create connect() failed, errno = 111 (connection refused); nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 01:53:46.069 through 01:53:46.101 for tqpair handles 0x7ff500000b90, 0x7ff504000b90, and 0x7ff50c000b90, all targeting addr=10.0.0.2, port=4420.]
00:36:06.377 [2024-10-01 01:53:46.101432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.101458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.101597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.101623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.101789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.101815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.101935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.101974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.102832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.102858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 
00:36:06.377 [2024-10-01 01:53:46.102978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.103943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.103971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.104088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.104115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.377 [2024-10-01 01:53:46.104219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.377 [2024-10-01 01:53:46.104246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.377 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.104346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.104372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 
00:36:06.378 [2024-10-01 01:53:46.104486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.104514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.104632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.104661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.104796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.104824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.104959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.104987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.105853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.105880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 
00:36:06.378 [2024-10-01 01:53:46.106020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.106862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.106888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 
00:36:06.378 [2024-10-01 01:53:46.107439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.107863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.107889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.108749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 
00:36:06.378 [2024-10-01 01:53:46.108877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.108903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.109072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.109216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.109382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.109538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.378 [2024-10-01 01:53:46.109675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.378 qpair failed and we were unable to recover it. 00:36:06.378 [2024-10-01 01:53:46.109787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.109815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.109952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.109979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.110127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.110296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.379 [2024-10-01 01:53:46.110437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.110582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.110723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.110861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.110888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.111817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.379 [2024-10-01 01:53:46.111951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.111976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.112955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.112983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.379 [2024-10-01 01:53:46.113400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.113852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.113975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.379 [2024-10-01 01:53:46.114838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.114970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.114995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.115919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.115949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.116063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.116092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 00:36:06.379 [2024-10-01 01:53:46.116201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.379 [2024-10-01 01:53:46.116227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.379 qpair failed and we were unable to recover it. 
00:36:06.379 [2024-10-01 01:53:46.116338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.116366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.116496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.116523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.116622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.116649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.116752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.116779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.116889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.116917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-10-01 01:53:46.117773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.117917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.117956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.118930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.118958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.119109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-10-01 01:53:46.119254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.119383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.119540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.119678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.119843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.119871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-10-01 01:53:46.120686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.120835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.120972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.121903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.121931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.122079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.122119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 
00:36:06.380 [2024-10-01 01:53:46.122251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.122280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.122424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.122451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.380 [2024-10-01 01:53:46.122564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.380 [2024-10-01 01:53:46.122590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.380 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.122724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.122751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.122893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.122920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-10-01 01:53:46.123752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.123912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.123950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.124958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.124986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-10-01 01:53:46.125258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.125957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.125984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-10-01 01:53:46.126683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.126939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.126966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.127864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.127982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 
00:36:06.381 [2024-10-01 01:53:46.128127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.128294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.128453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.128614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.128783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.128933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.128962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.381 qpair failed and we were unable to recover it. 00:36:06.381 [2024-10-01 01:53:46.129087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.381 [2024-10-01 01:53:46.129126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.129253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.129280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.129393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.129418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.129523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.129549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-10-01 01:53:46.129656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.129683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.129815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.129855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.130953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.130992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.131153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-10-01 01:53:46.131290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.131451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.131578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.131705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.131862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.131901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-10-01 01:53:46.132763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.132901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.132941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.133940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.133964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.134072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 
00:36:06.382 [2024-10-01 01:53:46.134197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.134332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.134485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.134645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.382 [2024-10-01 01:53:46.134783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.382 [2024-10-01 01:53:46.134809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.382 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.134911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.134938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.135093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.135248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.135415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.135550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 
00:36:06.383 [2024-10-01 01:53:46.135709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.135876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.135903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.136915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.136943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 
00:36:06.383 [2024-10-01 01:53:46.137222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.137929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.137968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 
00:36:06.383 [2024-10-01 01:53:46.138647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.138929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.138954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.139871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.139896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 
00:36:06.383 [2024-10-01 01:53:46.140007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.140888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.140916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.141028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.383 [2024-10-01 01:53:46.141058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.383 qpair failed and we were unable to recover it. 00:36:06.383 [2024-10-01 01:53:46.141165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.141289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 
00:36:06.384 [2024-10-01 01:53:46.141419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.141545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.141689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.141826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.141966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.141993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 
00:36:06.384 [2024-10-01 01:53:46.142822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.142848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.142970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.143878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.143906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 
00:36:06.384 [2024-10-01 01:53:46.144327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.144861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.144893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 
00:36:06.384 [2024-10-01 01:53:46.145741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.145891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.145919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.146906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.146934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.147056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.147095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 
00:36:06.384 [2024-10-01 01:53:46.147215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.147241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.384 qpair failed and we were unable to recover it. 00:36:06.384 [2024-10-01 01:53:46.147379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.384 [2024-10-01 01:53:46.147404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.147509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.147536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.147648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.147673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.147813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.147840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.147951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.147979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.148110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.148272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.148413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.148563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 
00:36:06.385 [2024-10-01 01:53:46.148699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.148854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.148893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.149931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.149959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.150110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 
00:36:06.385 [2024-10-01 01:53:46.150241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:06.385 [2024-10-01 01:53:46.150417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.150553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:06.385 [2024-10-01 01:53:46.150716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.150891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.150919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:06.385 [2024-10-01 01:53:46.151026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.151160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:06.385 [2024-10-01 01:53:46.151186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.151300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.385 qpair failed and we were unable to recover it. 
00:36:06.385 [2024-10-01 01:53:46.151436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.151613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.151752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.151887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.151912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.152781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 
00:36:06.385 [2024-10-01 01:53:46.152916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.152941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.153069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.153094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.153197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.153222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.153340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.153365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.153508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.153534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.385 qpair failed and we were unable to recover it. 00:36:06.385 [2024-10-01 01:53:46.153644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.385 [2024-10-01 01:53:46.153669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.386 qpair failed and we were unable to recover it. 00:36:06.386 [2024-10-01 01:53:46.153771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.386 [2024-10-01 01:53:46.153797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.386 qpair failed and we were unable to recover it. 00:36:06.386 [2024-10-01 01:53:46.153906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.153932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 
00:36:06.659 [2024-10-01 01:53:46.154346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.154936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.154961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 
00:36:06.659 [2024-10-01 01:53:46.155779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.155918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.155944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.156928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.156954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.659 [2024-10-01 01:53:46.157085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.157112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 
00:36:06.659 [2024-10-01 01:53:46.157228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.659 [2024-10-01 01:53:46.157268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.659 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.157409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.157438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.157602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.157629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.157756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.157783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.157892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.157919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 
00:36:06.660 [2024-10-01 01:53:46.158780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.158909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.158935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.159863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.159892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 
00:36:06.660 [2024-10-01 01:53:46.160290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.160914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.160940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 
00:36:06.660 [2024-10-01 01:53:46.161784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.161932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.161959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.660 [2024-10-01 01:53:46.162884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.660 qpair failed and we were unable to recover it. 00:36:06.660 [2024-10-01 01:53:46.162984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 
00:36:06.661 [2024-10-01 01:53:46.163249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.163848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.163975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.164176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.164308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.164443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.164595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 
00:36:06.661 [2024-10-01 01:53:46.164726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.164922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.164948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.165873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.165984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 
00:36:06.661 [2024-10-01 01:53:46.166257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.166872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.166994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.167170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.167306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.167475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.167607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 
00:36:06.661 [2024-10-01 01:53:46.167769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.167931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.167958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.168075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.168102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.168210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.168236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.168338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.661 [2024-10-01 01:53:46.168365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.661 qpair failed and we were unable to recover it. 00:36:06.661 [2024-10-01 01:53:46.168481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.168514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.168625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.168652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.168803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.168831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.168961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.168988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.169143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 
00:36:06.662 [2024-10-01 01:53:46.169304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.169435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.169579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.169784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.169914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.169941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.170056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.170085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.170189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.170216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.170326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.170352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.170513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.170546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 00:36:06.662 [2024-10-01 01:53:46.170676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.662 [2024-10-01 01:53:46.170704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.662 qpair failed and we were unable to recover it. 
00:36:06.662 [2024-10-01 01:53:46.170851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.662 [2024-10-01 01:53:46.170878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420
00:36:06.662 qpair failed and we were unable to recover it.
[The same connect() failure (errno = 111) and the resulting "qpair failed and we were unable to recover it" error repeat continuously from 01:53:46.170 through 01:53:46.201, cycling over tqpair handles 0x7ff504000b90, 0x196b340, 0x7ff500000b90, and 0x7ff50c000b90, all against addr=10.0.0.2, port=4420.]
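For readers scanning the failure loop above: errno = 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at that moment, so the host-side NVMe/TCP qpairs cannot be established. A minimal shell probe of the same condition is sketched below; it is illustrative only (not part of the test scripts), the address and port are simply the ones from the log, and /dev/tcp is a bash-specific redirection.

    #!/usr/bin/env bash
    # Illustrative probe only: a TCP connect to a port with no listener is
    # refused, which is the condition behind "connect() failed, errno = 111".
    addr=10.0.0.2
    port=4420
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "a listener is accepting on ${addr}:${port}"
    else
        echo "connect to ${addr}:${port} was refused or timed out"
    fi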
00:36:06.663 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:06.663 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:06.663 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:06.663 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
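The `rpc_cmd bdev_malloc_create 64 512 -b Malloc0` trace above is the test allocating its backing device; `rpc_cmd` is the autotest wrapper around SPDK's `scripts/rpc.py`. A minimal standalone sketch of the same step is shown below, assuming an SPDK target is already running on the default RPC socket; the checkout path is illustrative.

    #!/usr/bin/env bash
    # Sketch only: create the same 64 MB, 512-byte-block RAM-backed bdev the
    # test requests, named Malloc0. Assumes a running SPDK target listening on
    # the default RPC socket (/var/tmp/spdk.sock); SPDK_DIR is hypothetical.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}

    "${SPDK_DIR}/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0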
00:36:06.667 [2024-10-01 01:53:46.198833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.198858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.198962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.198993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.199878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.199987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.200017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.200133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 Malloc0 00:36:06.667 [2024-10-01 01:53:46.200159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 
00:36:06.667 [2024-10-01 01:53:46.200269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.200297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.200409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 [2024-10-01 01:53:46.200437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.200549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.667 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.667 [2024-10-01 01:53:46.200576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.667 qpair failed and we were unable to recover it. 00:36:06.667 [2024-10-01 01:53:46.200687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.200713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:06.668 [2024-10-01 01:53:46.200852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.668 [2024-10-01 01:53:46.200891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.668 [2024-10-01 01:53:46.201015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.201154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.201291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 
00:36:06.668 [2024-10-01 01:53:46.201454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.201585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.201720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.201855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.201883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.202788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 
00:36:06.668 [2024-10-01 01:53:46.202954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.202981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.203856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.668 [2024-10-01 01:53:46.203958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.203985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.204124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.204276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 
00:36:06.668 [2024-10-01 01:53:46.204408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.204570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.204725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.204866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.204893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.205009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.205035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.668 qpair failed and we were unable to recover it. 00:36:06.668 [2024-10-01 01:53:46.205172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.668 [2024-10-01 01:53:46.205198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.205308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.205334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.205443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.205470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.205583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.205613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.205731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.205758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 
00:36:06.669 [2024-10-01 01:53:46.205915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.205948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.206948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.206975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.207090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.207244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 
00:36:06.669 [2024-10-01 01:53:46.207429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.207570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.207719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.207881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.207907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.208785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 
00:36:06.669 [2024-10-01 01:53:46.208924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.208952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.209962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.209990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.210160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.210186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.210302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.210329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 
00:36:06.669 [2024-10-01 01:53:46.210435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.210462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.210569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.210595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.669 [2024-10-01 01:53:46.210701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.669 [2024-10-01 01:53:46.210729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.669 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.210861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.210887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.210991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.211138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.211273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.211466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.211605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.211739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 
00:36:06.670 [2024-10-01 01:53:46.211893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.211938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.670 [2024-10-01 01:53:46.212088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:06.670 [2024-10-01 01:53:46.212226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.670 [2024-10-01 01:53:46.212361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.670 [2024-10-01 01:53:46.212467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.212495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.212658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.212812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.212919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.212945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 
00:36:06.670 [2024-10-01 01:53:46.213056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.213913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.213939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 
00:36:06.670 [2024-10-01 01:53:46.214478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.214965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.214993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 00:36:06.670 [2024-10-01 01:53:46.215779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.670 [2024-10-01 01:53:46.215805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.670 qpair failed and we were unable to recover it. 
00:36:06.670 [2024-10-01 01:53:46.215911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.215939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.216864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.216891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 
00:36:06.671 [2024-10-01 01:53:46.217496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.217947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.217986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.218886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.218912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 
00:36:06.671 [2024-10-01 01:53:46.219029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.219953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.219981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.671 [2024-10-01 01:53:46.220099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:06.671 [2024-10-01 01:53:46.220235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 
00:36:06.671 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.671 [2024-10-01 01:53:46.220381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.671 [2024-10-01 01:53:46.220517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.220675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.220808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.220834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.221006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.221033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.221153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.221192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.221346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.671 [2024-10-01 01:53:46.221373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.671 qpair failed and we were unable to recover it. 00:36:06.671 [2024-10-01 01:53:46.221487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.221514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.221624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.221650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-10-01 01:53:46.221762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.221788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.221936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.221975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.222868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.222894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-10-01 01:53:46.223319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.223922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.223950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-10-01 01:53:46.224824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.224850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.224979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.225959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.225985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 
00:36:06.672 [2024-10-01 01:53:46.226235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.672 [2024-10-01 01:53:46.226791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.672 qpair failed and we were unable to recover it. 00:36:06.672 [2024-10-01 01:53:46.226937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.226976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-10-01 01:53:46.227695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.227856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.227984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.228145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.673 [2024-10-01 01:53:46.228174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.228294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.673 [2024-10-01 01:53:46.228436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.673 [2024-10-01 01:53:46.228566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.228738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
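The xtrace line above shows the test adding the TCP listener through the autotest rpc_cmd wrapper. Assuming the standard SPDK scripts layout (rpc_cmd forwards its arguments to scripts/rpc.py), the same call issued directly would look like:

    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The surrounding posix_sock_create/nvme_tcp_qpair_connect_sock errors appear to be the initiator retrying while the target listener is still being configured, which is the window this disconnect test exercises.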
00:36:06.673 [2024-10-01 01:53:46.228875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.228902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff504000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff500000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196b340 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.229873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.229912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 
00:36:06.673 [2024-10-01 01:53:46.230313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.230843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.230869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.231018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.231045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.231156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.231181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.673 [2024-10-01 01:53:46.231320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.673 [2024-10-01 01:53:46.231345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.673 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.231452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-10-01 01:53:46.231479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.231576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-10-01 01:53:46.231602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-10-01 01:53:46.231703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-10-01 01:53:46.231730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.231843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-10-01 01:53:46.231868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.232009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.674 [2024-10-01 01:53:46.232036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff50c000b90 with addr=10.0.0.2, port=4420 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.232140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.674 [2024-10-01 01:53:46.234722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.234852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.234881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.234896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.234910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.234944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 
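For reference, errno 111 in the posix_sock_create messages is ECONNREFUSED ("Connection refused"), and the Connect completion reported as "sct 1, sc 130" (0x82) is the fabrics-level rejection that pairs with the target-side "Unknown controller ID 0x1" error in the same record. An illustrative way to decode the errno on the build host (not part of the test itself):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'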
00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.674 01:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1064119 00:36:06.674 [2024-10-01 01:53:46.244514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.244636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.244665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.244680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.244694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.244724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.254543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.254684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.254712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.254726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.254739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.254770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-10-01 01:53:46.264590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.264708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.264735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.264749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.264762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.264793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.274526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.274654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.274680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.274695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.274708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.274739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.284540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.284656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.284683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.284697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.284710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.284739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-10-01 01:53:46.294593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.294697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.294723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.294737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.294751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.294780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.304673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.304787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.304820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.304834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.304847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.304876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.314649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.314764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.314791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.314805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.314817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.314847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 
00:36:06.674 [2024-10-01 01:53:46.324665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.324775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.674 [2024-10-01 01:53:46.324801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.674 [2024-10-01 01:53:46.324816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.674 [2024-10-01 01:53:46.324834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.674 [2024-10-01 01:53:46.324868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.674 qpair failed and we were unable to recover it. 00:36:06.674 [2024-10-01 01:53:46.334726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.674 [2024-10-01 01:53:46.334841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.334868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.334883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.334895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.334925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.344705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.344815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.344842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.344856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.344869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.344912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-10-01 01:53:46.354727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.354844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.354871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.354886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.354903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.354933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.364754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.364873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.364900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.364915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.364927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.364958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.374812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.374934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.374960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.374975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.374988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.375026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-10-01 01:53:46.384846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.384960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.384986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.385009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.385024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.385053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.394827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.394945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.394970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.394990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.395014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.395045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.404874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.404984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.405021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.405036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.405049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.405079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-10-01 01:53:46.414907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.415025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.415057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.415078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.415093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.415136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.424931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.425066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.425093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.425107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.425120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.425150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.434969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.435100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.435126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.435141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.435154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.435197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 
00:36:06.675 [2024-10-01 01:53:46.445012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.445125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.445152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.445165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.445178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.445209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.455022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.455137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.455164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.675 [2024-10-01 01:53:46.455179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.675 [2024-10-01 01:53:46.455191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.675 [2024-10-01 01:53:46.455222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.675 qpair failed and we were unable to recover it. 00:36:06.675 [2024-10-01 01:53:46.465062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.675 [2024-10-01 01:53:46.465181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.675 [2024-10-01 01:53:46.465207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.676 [2024-10-01 01:53:46.465221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.676 [2024-10-01 01:53:46.465234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.676 [2024-10-01 01:53:46.465265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.676 [2024-10-01 01:53:46.475093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.676 [2024-10-01 01:53:46.475214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.676 [2024-10-01 01:53:46.475241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.676 [2024-10-01 01:53:46.475255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.676 [2024-10-01 01:53:46.475268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.676 [2024-10-01 01:53:46.475312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-10-01 01:53:46.485089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.676 [2024-10-01 01:53:46.485203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.676 [2024-10-01 01:53:46.485229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.676 [2024-10-01 01:53:46.485243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.676 [2024-10-01 01:53:46.485256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.676 [2024-10-01 01:53:46.485286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.676 qpair failed and we were unable to recover it. 00:36:06.676 [2024-10-01 01:53:46.495169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.676 [2024-10-01 01:53:46.495286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.676 [2024-10-01 01:53:46.495313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.676 [2024-10-01 01:53:46.495327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.676 [2024-10-01 01:53:46.495340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.676 [2024-10-01 01:53:46.495382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.676 qpair failed and we were unable to recover it. 
00:36:06.937 [2024-10-01 01:53:46.505193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.505310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.505337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.505358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.505371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.505402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.515182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.515289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.515315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.515329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.515342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.515373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.525252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.525362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.525388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.525402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.525415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.525445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 
00:36:06.937 [2024-10-01 01:53:46.535302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.535448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.535475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.535489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.535502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.535531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.545294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.545405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.545430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.545444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.545457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.545488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.555309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.555422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.555449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.555463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.555475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.555506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 
00:36:06.937 [2024-10-01 01:53:46.565377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.565487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.565513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.565527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.565539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.565570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.575385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.575493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.575519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.575533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.575546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.575576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.585431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.585558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.585585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.585599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.585612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.585643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 
00:36:06.937 [2024-10-01 01:53:46.595408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.595520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.937 [2024-10-01 01:53:46.595553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.937 [2024-10-01 01:53:46.595568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.937 [2024-10-01 01:53:46.595580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.937 [2024-10-01 01:53:46.595610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.937 qpair failed and we were unable to recover it. 00:36:06.937 [2024-10-01 01:53:46.605471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.937 [2024-10-01 01:53:46.605588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.605614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.605628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.605641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.605670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.615497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.615609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.615635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.615649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.615663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.615693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 
00:36:06.938 [2024-10-01 01:53:46.625510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.625630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.625656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.625670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.625683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.625714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.635548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.635689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.635715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.635729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.635742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.635778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.645591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.645718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.645745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.645759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.645772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.645803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 
00:36:06.938 [2024-10-01 01:53:46.655569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.655674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.655701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.655717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.655730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.655760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.665647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.665761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.665787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.665801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.665814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.665844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.675654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.675762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.675788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.675802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.675815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.675845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 
00:36:06.938 [2024-10-01 01:53:46.685687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.685791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.685823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.685838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.685851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.685881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.695689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.695793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.695819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.695833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.695847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.695877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.705771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.705888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.705914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.705928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.705941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.705987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 
00:36:06.938 [2024-10-01 01:53:46.715807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.715952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.715978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.715993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.716013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.716056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.725784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.725896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.938 [2024-10-01 01:53:46.725923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.938 [2024-10-01 01:53:46.725937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.938 [2024-10-01 01:53:46.725958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.938 [2024-10-01 01:53:46.725991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.938 qpair failed and we were unable to recover it. 00:36:06.938 [2024-10-01 01:53:46.735804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.938 [2024-10-01 01:53:46.735909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.735935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.735949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.735961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.736011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 
00:36:06.939 [2024-10-01 01:53:46.745865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.939 [2024-10-01 01:53:46.746003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.746030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.746044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.746057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.746086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 00:36:06.939 [2024-10-01 01:53:46.755959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.939 [2024-10-01 01:53:46.756084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.756111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.756126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.756141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.756173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 00:36:06.939 [2024-10-01 01:53:46.765890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.939 [2024-10-01 01:53:46.765996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.766031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.766046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.766058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.766088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 
00:36:06.939 [2024-10-01 01:53:46.775950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.939 [2024-10-01 01:53:46.776075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.776102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.776117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.776130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.776160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 00:36:06.939 [2024-10-01 01:53:46.785958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.939 [2024-10-01 01:53:46.786084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.939 [2024-10-01 01:53:46.786111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.939 [2024-10-01 01:53:46.786125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.939 [2024-10-01 01:53:46.786139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:06.939 [2024-10-01 01:53:46.786170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:06.939 qpair failed and we were unable to recover it. 00:36:07.200 [2024-10-01 01:53:46.795996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.200 [2024-10-01 01:53:46.796130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.200 [2024-10-01 01:53:46.796156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.200 [2024-10-01 01:53:46.796170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.200 [2024-10-01 01:53:46.796183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.200 [2024-10-01 01:53:46.796214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.200 qpair failed and we were unable to recover it. 
00:36:07.200 [2024-10-01 01:53:46.806007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.200 [2024-10-01 01:53:46.806133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.200 [2024-10-01 01:53:46.806160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.200 [2024-10-01 01:53:46.806174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.200 [2024-10-01 01:53:46.806187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.200 [2024-10-01 01:53:46.806216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.200 qpair failed and we were unable to recover it. 00:36:07.200 [2024-10-01 01:53:46.816049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.200 [2024-10-01 01:53:46.816157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.200 [2024-10-01 01:53:46.816182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.200 [2024-10-01 01:53:46.816196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.200 [2024-10-01 01:53:46.816216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.200 [2024-10-01 01:53:46.816247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.200 qpair failed and we were unable to recover it. 00:36:07.200 [2024-10-01 01:53:46.826134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.200 [2024-10-01 01:53:46.826280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.200 [2024-10-01 01:53:46.826306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.826320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.826333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.826365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 
00:36:07.201 [2024-10-01 01:53:46.836102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.836207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.836234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.836248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.836261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.836290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.846146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.846256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.846282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.846296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.846310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.846338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.856207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.856324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.856351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.856365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.856378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.856421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 
00:36:07.201 [2024-10-01 01:53:46.866234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.866349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.866375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.866388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.866400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.866429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.876246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.876357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.876383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.876396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.876409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.876439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.886268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.886391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.886417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.886431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.886443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.886473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 
00:36:07.201 [2024-10-01 01:53:46.896249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.896352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.896378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.896392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.896405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.896447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.906339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.906489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.906515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.906536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.906550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.906579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.916339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.916451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.916477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.916491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.916504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.916534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 
00:36:07.201 [2024-10-01 01:53:46.926407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.926520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.926547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.926560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.926573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.926604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.936358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.936466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.936493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.936508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.936520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.936551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 00:36:07.201 [2024-10-01 01:53:46.946454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.946576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.946602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.946616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.201 [2024-10-01 01:53:46.946629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.201 [2024-10-01 01:53:46.946660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.201 qpair failed and we were unable to recover it. 
00:36:07.201 [2024-10-01 01:53:46.956440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.201 [2024-10-01 01:53:46.956552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.201 [2024-10-01 01:53:46.956579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.201 [2024-10-01 01:53:46.956593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:46.956605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:46.956636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:46.966435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:46.966539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:46.966565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:46.966579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:46.966592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:46.966623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:46.976471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:46.976594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:46.976620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:46.976634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:46.976647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:46.976690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 
00:36:07.202 [2024-10-01 01:53:46.986596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:46.986711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:46.986738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:46.986753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:46.986766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:46.986796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:46.996592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:46.996711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:46.996737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:46.996758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:46.996772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:46.996802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:47.006644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:47.006751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:47.006777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:47.006791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:47.006804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:47.006834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 
00:36:07.202 [2024-10-01 01:53:47.016650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:47.016759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:47.016785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:47.016799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:47.016812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:47.016843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:47.026661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:47.026774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:47.026803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:47.026818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:47.026831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:47.026872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.202 [2024-10-01 01:53:47.036893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:47.037022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:47.037049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:47.037063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:47.037075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:47.037119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 
00:36:07.202 [2024-10-01 01:53:47.046730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.202 [2024-10-01 01:53:47.046840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.202 [2024-10-01 01:53:47.046867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.202 [2024-10-01 01:53:47.046881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.202 [2024-10-01 01:53:47.046895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.202 [2024-10-01 01:53:47.046926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.202 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.056747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.056883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.056909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.056923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.056936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.056966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.066770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.066885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.066910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.066924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.066937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.066967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 
00:36:07.463 [2024-10-01 01:53:47.076783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.076893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.076918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.076932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.076945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.076975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.086804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.086916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.086948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.086963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.086976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.087013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.096833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.096963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.096989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.097011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.097025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.097054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 
00:36:07.463 [2024-10-01 01:53:47.106858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.106982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.107015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.107031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.107044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.107073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.116892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.117015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.117042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.117056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.117069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.117099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.127008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.127118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.127144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.127158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.127173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.127209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 
00:36:07.463 [2024-10-01 01:53:47.136912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.137054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.137080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.137095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.137107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.137137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.146976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.147133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.147160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.147174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.147188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.147218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 00:36:07.463 [2024-10-01 01:53:47.156984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.463 [2024-10-01 01:53:47.157149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.463 [2024-10-01 01:53:47.157175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.463 [2024-10-01 01:53:47.157189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.463 [2024-10-01 01:53:47.157202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.463 [2024-10-01 01:53:47.157233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.463 qpair failed and we were unable to recover it. 
00:36:07.463 [2024-10-01 01:53:47.167022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.167133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.167159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.167172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.167185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.167214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.177059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.177165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.177199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.177215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.177227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.177258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.187137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.187276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.187303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.187317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.187330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.187360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 
00:36:07.464 [2024-10-01 01:53:47.197090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.197209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.197243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.197263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.197291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.197333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.207211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.207342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.207369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.207383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.207396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.207426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.217185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.217301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.217328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.217342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.217355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.217391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 
00:36:07.464 [2024-10-01 01:53:47.227219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.227362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.227388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.227402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.227415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.227446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.237218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.237325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.237352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.237366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.237378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.237409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.247227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.247363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.247390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.247404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.247417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.247447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 
00:36:07.464 [2024-10-01 01:53:47.257279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.257384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.257410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.257425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.257438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.257468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.267356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.267474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.267500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.267514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.267527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.267558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.277341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.277454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.277481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.277494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.277511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.277543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 
00:36:07.464 [2024-10-01 01:53:47.287391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.287502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.464 [2024-10-01 01:53:47.287528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.464 [2024-10-01 01:53:47.287542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.464 [2024-10-01 01:53:47.287555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.464 [2024-10-01 01:53:47.287585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.464 qpair failed and we were unable to recover it. 00:36:07.464 [2024-10-01 01:53:47.297355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.464 [2024-10-01 01:53:47.297491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.465 [2024-10-01 01:53:47.297517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.465 [2024-10-01 01:53:47.297530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.465 [2024-10-01 01:53:47.297543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.465 [2024-10-01 01:53:47.297572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.465 qpair failed and we were unable to recover it. 00:36:07.465 [2024-10-01 01:53:47.307401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.465 [2024-10-01 01:53:47.307512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.465 [2024-10-01 01:53:47.307538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.465 [2024-10-01 01:53:47.307552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.465 [2024-10-01 01:53:47.307574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.465 [2024-10-01 01:53:47.307605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.465 qpair failed and we were unable to recover it. 
00:36:07.724 [2024-10-01 01:53:47.317418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.724 [2024-10-01 01:53:47.317526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.724 [2024-10-01 01:53:47.317552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.724 [2024-10-01 01:53:47.317565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.724 [2024-10-01 01:53:47.317578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.724 [2024-10-01 01:53:47.317608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.724 qpair failed and we were unable to recover it. 00:36:07.724 [2024-10-01 01:53:47.327559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.724 [2024-10-01 01:53:47.327680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.724 [2024-10-01 01:53:47.327732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.724 [2024-10-01 01:53:47.327756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.724 [2024-10-01 01:53:47.327771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.724 [2024-10-01 01:53:47.327816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.724 qpair failed and we were unable to recover it. 00:36:07.724 [2024-10-01 01:53:47.337485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.724 [2024-10-01 01:53:47.337595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.724 [2024-10-01 01:53:47.337622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.724 [2024-10-01 01:53:47.337637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.724 [2024-10-01 01:53:47.337649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.724 [2024-10-01 01:53:47.337679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.724 qpair failed and we were unable to recover it. 
00:36:07.724 [2024-10-01 01:53:47.347529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.724 [2024-10-01 01:53:47.347684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.724 [2024-10-01 01:53:47.347711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.724 [2024-10-01 01:53:47.347725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.724 [2024-10-01 01:53:47.347738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.724 [2024-10-01 01:53:47.347781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.724 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.357563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.357678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.357705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.357719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.357732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.357762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.367555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.367666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.367693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.367707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.367720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.367751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 
00:36:07.725 [2024-10-01 01:53:47.377607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.377718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.377744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.377758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.377775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.377807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.387692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.387851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.387877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.387891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.387904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.387948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.397685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.397840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.397866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.397886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.397900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.397943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 
00:36:07.725 [2024-10-01 01:53:47.407693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.407827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.407853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.407867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.407880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.407909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.417748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.417880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.417907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.417923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.417937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.417968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.427765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.427879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.427904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.427918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.427931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.427961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 
00:36:07.725 [2024-10-01 01:53:47.437803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.437919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.437945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.437958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.437971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.438007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.447821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.447924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.447950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.447964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.447977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.448015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.457842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.457969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.458003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.458020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.458034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.458063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 
00:36:07.725 [2024-10-01 01:53:47.467874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.467992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.468029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.468048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.468061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.468092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.477915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.725 [2024-10-01 01:53:47.478074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.725 [2024-10-01 01:53:47.478101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.725 [2024-10-01 01:53:47.478115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.725 [2024-10-01 01:53:47.478129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.725 [2024-10-01 01:53:47.478173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.725 qpair failed and we were unable to recover it. 00:36:07.725 [2024-10-01 01:53:47.487938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.488060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.488086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.488109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.488123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.488154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 
00:36:07.726 [2024-10-01 01:53:47.498063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.498198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.498224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.498237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.498250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.498281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.726 [2024-10-01 01:53:47.507988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.508139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.508166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.508180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.508193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.508223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.726 [2024-10-01 01:53:47.518016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.518130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.518156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.518170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.518183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.518214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 
00:36:07.726 [2024-10-01 01:53:47.528135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.528246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.528272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.528286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.528300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.528330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.726 [2024-10-01 01:53:47.538082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.538196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.538222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.538236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.538249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.538280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.726 [2024-10-01 01:53:47.548136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.548250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.548275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.548289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.548302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.548332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 
00:36:07.726 [2024-10-01 01:53:47.558121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.558269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.558296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.558310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.558323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.558365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.726 [2024-10-01 01:53:47.568202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.726 [2024-10-01 01:53:47.568313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.726 [2024-10-01 01:53:47.568338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.726 [2024-10-01 01:53:47.568353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.726 [2024-10-01 01:53:47.568366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.726 [2024-10-01 01:53:47.568397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.726 qpair failed and we were unable to recover it. 00:36:07.985 [2024-10-01 01:53:47.578222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.985 [2024-10-01 01:53:47.578339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.985 [2024-10-01 01:53:47.578371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.985 [2024-10-01 01:53:47.578386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.985 [2024-10-01 01:53:47.578399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.985 [2024-10-01 01:53:47.578428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.985 qpair failed and we were unable to recover it. 
00:36:07.985 [2024-10-01 01:53:47.588348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.985 [2024-10-01 01:53:47.588461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.985 [2024-10-01 01:53:47.588487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.985 [2024-10-01 01:53:47.588501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.985 [2024-10-01 01:53:47.588514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.985 [2024-10-01 01:53:47.588544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.985 qpair failed and we were unable to recover it. 00:36:07.985 [2024-10-01 01:53:47.598276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.985 [2024-10-01 01:53:47.598434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.985 [2024-10-01 01:53:47.598460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.985 [2024-10-01 01:53:47.598474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.985 [2024-10-01 01:53:47.598487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.985 [2024-10-01 01:53:47.598517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.985 qpair failed and we were unable to recover it. 00:36:07.985 [2024-10-01 01:53:47.608300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.608403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.608429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.608443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.608456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.608499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 
00:36:07.986 [2024-10-01 01:53:47.618267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.618371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.618397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.618411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.618424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.618461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.628385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.628521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.628547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.628562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.628575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.628604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.638357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.638467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.638493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.638507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.638520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.638562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 
00:36:07.986 [2024-10-01 01:53:47.648424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.648577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.648603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.648617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.648632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.648662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.658409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.658518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.658544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.658558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.658571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.658602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.668420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.668530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.668562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.668577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.668590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.668619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 
00:36:07.986 [2024-10-01 01:53:47.678478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.678586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.678612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.678627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.678640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.678670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.688531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.688640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.688667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.688681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.688694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.688724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.698476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.698578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.698604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.698618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.698631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.698661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 
00:36:07.986 [2024-10-01 01:53:47.708576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.708699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.708725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.708739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.708752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.708788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.718633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.718741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.718767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.718781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.718794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.718825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 00:36:07.986 [2024-10-01 01:53:47.728616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.728738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.986 [2024-10-01 01:53:47.728764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.986 [2024-10-01 01:53:47.728777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.986 [2024-10-01 01:53:47.728790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.986 [2024-10-01 01:53:47.728823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.986 qpair failed and we were unable to recover it. 
00:36:07.986 [2024-10-01 01:53:47.738648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.986 [2024-10-01 01:53:47.738761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.738788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.738803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.738816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.738858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.748659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.748804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.748829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.748843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.748856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.748887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.758706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.758820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.758852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.758866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.758879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.758909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 
00:36:07.987 [2024-10-01 01:53:47.768716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.768840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.768866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.768880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.768894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.768924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.778754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.778892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.778919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.778933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.778946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.778976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.788778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.788907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.788932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.788946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.788959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.788989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 
00:36:07.987 [2024-10-01 01:53:47.798798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.798924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.798950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.798964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.798983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.799021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.808808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.808913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.808938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.808953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.808966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.808995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:07.987 [2024-10-01 01:53:47.818855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.818984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.819019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.819034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.819046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.819075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 
00:36:07.987 [2024-10-01 01:53:47.828885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.987 [2024-10-01 01:53:47.829010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.987 [2024-10-01 01:53:47.829036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.987 [2024-10-01 01:53:47.829051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.987 [2024-10-01 01:53:47.829064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:07.987 [2024-10-01 01:53:47.829095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.987 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.838943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.839069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.839096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.839110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.839123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.839169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.848968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.849091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.849118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.849133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.849145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.849176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-10-01 01:53:47.858989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.859132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.859170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.859184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.859197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.859228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.869021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.869132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.869156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.869169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.869182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.869211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.879062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.879192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.879220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.879241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.879258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.879295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 
00:36:08.245 [2024-10-01 01:53:47.889056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.889177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.889204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.889217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.889236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.889269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.899082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.245 [2024-10-01 01:53:47.899194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.245 [2024-10-01 01:53:47.899220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.245 [2024-10-01 01:53:47.899234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.245 [2024-10-01 01:53:47.899247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.245 [2024-10-01 01:53:47.899279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.245 qpair failed and we were unable to recover it. 00:36:08.245 [2024-10-01 01:53:47.909174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.909309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.909335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.909349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.909361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.909390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-10-01 01:53:47.919158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.919271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.919297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.919312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.919325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.919355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.929164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.929271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.929308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.929322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.929335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.929365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.939233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.939342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.939370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.939385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.939397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.939427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-10-01 01:53:47.949289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.949409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.949435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.949449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.949462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.949505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.959251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.959367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.959394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.959408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.959421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.959452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.969293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.969408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.969435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.969449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.969461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.969491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-10-01 01:53:47.979396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.979530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.979556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.979576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.979590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.979619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.989373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.989490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.989516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.989530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.989543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.989573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:47.999363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:47.999473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:47.999499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:47.999513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:47.999526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:47.999555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-10-01 01:53:48.009388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:48.009497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:48.009522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:48.009536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:48.009549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:48.009580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:48.019402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:48.019512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:48.019539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:48.019553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:48.019566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:48.019596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 00:36:08.246 [2024-10-01 01:53:48.029441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.246 [2024-10-01 01:53:48.029560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.246 [2024-10-01 01:53:48.029585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.246 [2024-10-01 01:53:48.029600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.246 [2024-10-01 01:53:48.029613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.246 [2024-10-01 01:53:48.029643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.246 qpair failed and we were unable to recover it. 
00:36:08.246 [2024-10-01 01:53:48.039481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.039590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.039616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.039631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.039644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.039673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-10-01 01:53:48.049515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.049622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.049648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.049662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.049675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.049705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-10-01 01:53:48.059589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.059698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.059724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.059738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.059751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.059781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.247 [2024-10-01 01:53:48.069573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.069684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.069715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.069730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.069743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.069773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-10-01 01:53:48.079608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.079740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.079766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.079780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.079793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.079823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 00:36:08.247 [2024-10-01 01:53:48.089614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.247 [2024-10-01 01:53:48.089731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.247 [2024-10-01 01:53:48.089757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.247 [2024-10-01 01:53:48.089770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.247 [2024-10-01 01:53:48.089784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.247 [2024-10-01 01:53:48.089813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.247 qpair failed and we were unable to recover it. 
00:36:08.508 [2024-10-01 01:53:48.099670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.099802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.099828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.099842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.099855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.099886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 00:36:08.508 [2024-10-01 01:53:48.109700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.109827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.109854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.109868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.109881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.109912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 00:36:08.508 [2024-10-01 01:53:48.119728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.119839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.119865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.119879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.119892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.119922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 
00:36:08.508 [2024-10-01 01:53:48.129743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.129852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.129878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.129892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.129905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.129935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 00:36:08.508 [2024-10-01 01:53:48.139759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.139880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.139906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.139920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.139933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.139963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 00:36:08.508 [2024-10-01 01:53:48.149788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.149953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.149979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.149993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.150018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.150049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 
00:36:08.508 [2024-10-01 01:53:48.159835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.508 [2024-10-01 01:53:48.159948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.508 [2024-10-01 01:53:48.159980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.508 [2024-10-01 01:53:48.159995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.508 [2024-10-01 01:53:48.160020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.508 [2024-10-01 01:53:48.160051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.508 qpair failed and we were unable to recover it. 00:36:08.508 [2024-10-01 01:53:48.169861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.169966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.169992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.170016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.170030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.170074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.179879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.179988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.180021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.180036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.180049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.180078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 
00:36:08.509 [2024-10-01 01:53:48.189932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.190058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.190085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.190102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.190122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.190166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.199943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.200082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.200108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.200123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.200136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.200175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.209965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.210083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.210109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.210123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.210136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.210167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 
00:36:08.509 [2024-10-01 01:53:48.219977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.220086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.220112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.220127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.220139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.220170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.230085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.230244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.230270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.230285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.230298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.230339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.240067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.240181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.240207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.240222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.240236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.240266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 
00:36:08.509 [2024-10-01 01:53:48.250090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.250241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.250272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.250287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.250300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.250337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.260093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.260200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.260226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.260240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.260254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.260283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.270154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.270267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.270293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.270306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.270319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.270350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 
00:36:08.509 [2024-10-01 01:53:48.280272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.280391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.280417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.280431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.280443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.280473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.290202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.509 [2024-10-01 01:53:48.290313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.509 [2024-10-01 01:53:48.290339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.509 [2024-10-01 01:53:48.290353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.509 [2024-10-01 01:53:48.290371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.509 [2024-10-01 01:53:48.290404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.509 qpair failed and we were unable to recover it. 00:36:08.509 [2024-10-01 01:53:48.300235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.300345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.300371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.300384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.300396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.300426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 
00:36:08.510 [2024-10-01 01:53:48.310318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.310478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.310504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.310518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.310531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.310561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 00:36:08.510 [2024-10-01 01:53:48.320306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.320408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.320434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.320448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.320461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.320504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 00:36:08.510 [2024-10-01 01:53:48.330303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.330417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.330443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.330457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.330470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.330501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 
00:36:08.510 [2024-10-01 01:53:48.340379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.340490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.340516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.340530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.340543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.340572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 00:36:08.510 [2024-10-01 01:53:48.350443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.350585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.350612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.350626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.350639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.350670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 00:36:08.510 [2024-10-01 01:53:48.360445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.510 [2024-10-01 01:53:48.360560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.510 [2024-10-01 01:53:48.360586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.510 [2024-10-01 01:53:48.360601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.510 [2024-10-01 01:53:48.360613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.510 [2024-10-01 01:53:48.360644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.510 qpair failed and we were unable to recover it. 
00:36:08.770 [2024-10-01 01:53:48.370467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.370579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.370605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.370619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.370632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.370662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 00:36:08.770 [2024-10-01 01:53:48.380508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.380656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.380681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.380696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.380715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.380747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 00:36:08.770 [2024-10-01 01:53:48.390493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.390602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.390628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.390642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.390655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.390685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 
00:36:08.770 [2024-10-01 01:53:48.400516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.400622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.400648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.400662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.400674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.400704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 00:36:08.770 [2024-10-01 01:53:48.410561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.410676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.410703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.410717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.410729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.410759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 00:36:08.770 [2024-10-01 01:53:48.420602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.770 [2024-10-01 01:53:48.420713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.770 [2024-10-01 01:53:48.420739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.770 [2024-10-01 01:53:48.420753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.770 [2024-10-01 01:53:48.420765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.770 [2024-10-01 01:53:48.420796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.770 qpair failed and we were unable to recover it. 
00:36:08.771 [2024-10-01 01:53:48.430682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.430795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.430821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.430835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.430848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.430878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.440727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.440834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.440860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.440874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.440887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.440918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.450655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.450760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.450787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.450801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.450813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.450843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 
00:36:08.771 [2024-10-01 01:53:48.460731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.460894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.460920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.460934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.460946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.460988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.470815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.470929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.470955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.470976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.470990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.471028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.480736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.480848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.480874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.480887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.480900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.480931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 
00:36:08.771 [2024-10-01 01:53:48.490787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.490892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.490918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.490931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.490944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.490975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.500816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.500923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.500949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.500963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.500976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.501013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.510870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.510991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.511026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.511045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.511059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.511090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 
00:36:08.771 [2024-10-01 01:53:48.520928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.521056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.521083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.521097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.521110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.521140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.530894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.531011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.531037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.531051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.531063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.531093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.771 [2024-10-01 01:53:48.540935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.541057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.541084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.541098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.541112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.541141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 
00:36:08.771 [2024-10-01 01:53:48.551022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.771 [2024-10-01 01:53:48.551157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.771 [2024-10-01 01:53:48.551183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.771 [2024-10-01 01:53:48.551197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.771 [2024-10-01 01:53:48.551210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.771 [2024-10-01 01:53:48.551240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.771 qpair failed and we were unable to recover it. 00:36:08.772 [2024-10-01 01:53:48.561049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.561160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.561186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.561206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.561220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.561250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 00:36:08.772 [2024-10-01 01:53:48.571026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.571139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.571164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.571178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.571194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.571224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 
00:36:08.772 [2024-10-01 01:53:48.581108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.581257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.581283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.581297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.581310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.581340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 00:36:08.772 [2024-10-01 01:53:48.591104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.591264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.591291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.591305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.591318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.591348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 00:36:08.772 [2024-10-01 01:53:48.601112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.601244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.601270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.601284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.601296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.601326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 
00:36:08.772 [2024-10-01 01:53:48.611132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.611239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.611265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.611278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.611291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.611322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 00:36:08.772 [2024-10-01 01:53:48.621225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.772 [2024-10-01 01:53:48.621341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.772 [2024-10-01 01:53:48.621367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.772 [2024-10-01 01:53:48.621382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.772 [2024-10-01 01:53:48.621395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:08.772 [2024-10-01 01:53:48.621437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:08.772 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.631216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.631343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.631371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.631386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.631399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.631431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 
00:36:09.032 [2024-10-01 01:53:48.641244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.641353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.641380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.641394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.641407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.641438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.651236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.651348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.651380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.651395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.651408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.651438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.661268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.661409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.661435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.661450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.661462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.661493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 
00:36:09.032 [2024-10-01 01:53:48.671325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.671449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.671475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.671489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.671502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.671532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.681374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.681507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.681532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.681546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.681559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.681589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.691456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.691606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.691632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.691646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.691659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.691696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 
00:36:09.032 [2024-10-01 01:53:48.701398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.701546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.701572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.701586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.701600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.701629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.711474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.711590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.711616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.711630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.711643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.711672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 00:36:09.032 [2024-10-01 01:53:48.721489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.721615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.032 [2024-10-01 01:53:48.721641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.032 [2024-10-01 01:53:48.721655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.032 [2024-10-01 01:53:48.721668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.032 [2024-10-01 01:53:48.721698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.032 qpair failed and we were unable to recover it. 
00:36:09.032 [2024-10-01 01:53:48.731497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.032 [2024-10-01 01:53:48.731628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.731653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.731668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.731681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.731711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.741501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.741610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.741642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.741657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.741670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.741700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.751570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.751692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.751719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.751733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.751746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.751775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 
00:36:09.033 [2024-10-01 01:53:48.761582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.761712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.761738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.761752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.761765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.761795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.771620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.771722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.771748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.771762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.771776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.771805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.781660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.781773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.781800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.781814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.781827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.781863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 
00:36:09.033 [2024-10-01 01:53:48.791687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.791847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.791874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.791888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.791900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.791943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.801710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.801819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.801845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.801859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.801872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.801902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.811719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.811834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.811860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.811874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.811891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.811921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 
00:36:09.033 [2024-10-01 01:53:48.821716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.821821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.821848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.821862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.821874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.821903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.831817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.831991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.832028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.832042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.832055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.832086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.841785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.841893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.841919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.841933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.841945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.841975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 
00:36:09.033 [2024-10-01 01:53:48.851828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.851946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.851972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.851986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.852006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.852040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.861838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.861943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.861969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.861983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.862004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.862036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.033 [2024-10-01 01:53:48.871883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.872006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.872031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.872045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.872063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.872093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 
00:36:09.033 [2024-10-01 01:53:48.881893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.033 [2024-10-01 01:53:48.882007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.033 [2024-10-01 01:53:48.882033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.033 [2024-10-01 01:53:48.882047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.033 [2024-10-01 01:53:48.882060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.033 [2024-10-01 01:53:48.882090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.033 qpair failed and we were unable to recover it. 00:36:09.297 [2024-10-01 01:53:48.891946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.297 [2024-10-01 01:53:48.892071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.297 [2024-10-01 01:53:48.892098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.297 [2024-10-01 01:53:48.892112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.297 [2024-10-01 01:53:48.892125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.297 [2024-10-01 01:53:48.892157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.297 qpair failed and we were unable to recover it. 00:36:09.297 [2024-10-01 01:53:48.901986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.297 [2024-10-01 01:53:48.902108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.297 [2024-10-01 01:53:48.902134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.297 [2024-10-01 01:53:48.902148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.297 [2024-10-01 01:53:48.902160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.297 [2024-10-01 01:53:48.902191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.297 qpair failed and we were unable to recover it. 
00:36:09.297 [2024-10-01 01:53:48.912030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.297 [2024-10-01 01:53:48.912151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.297 [2024-10-01 01:53:48.912177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.297 [2024-10-01 01:53:48.912191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.297 [2024-10-01 01:53:48.912204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.297 [2024-10-01 01:53:48.912235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.297 qpair failed and we were unable to recover it. 00:36:09.297 [2024-10-01 01:53:48.922090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.297 [2024-10-01 01:53:48.922238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.297 [2024-10-01 01:53:48.922265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.297 [2024-10-01 01:53:48.922279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.297 [2024-10-01 01:53:48.922292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.297 [2024-10-01 01:53:48.922322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.297 qpair failed and we were unable to recover it. 00:36:09.297 [2024-10-01 01:53:48.932056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.297 [2024-10-01 01:53:48.932163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.297 [2024-10-01 01:53:48.932190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.932204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.932217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.932247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 
00:36:09.298 [2024-10-01 01:53:48.942112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.942220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.942245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.942260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.942272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.942303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:48.952128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.952238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.952265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.952279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.952292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.952322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:48.962154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.962298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.962324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.962344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.962358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.962389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 
00:36:09.298 [2024-10-01 01:53:48.972193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.972305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.972332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.972346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.972359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.972389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:48.982202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.982309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.982334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.982348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.982361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.982391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:48.992262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:48.992372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:48.992398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:48.992412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:48.992424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:48.992467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 
00:36:09.298 [2024-10-01 01:53:49.002353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:49.002477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:49.002503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:49.002517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:49.002530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:49.002567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:49.012337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:49.012468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:49.012495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:49.012509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:49.012522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:49.012552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 00:36:09.298 [2024-10-01 01:53:49.022398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.298 [2024-10-01 01:53:49.022527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.298 [2024-10-01 01:53:49.022553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.298 [2024-10-01 01:53:49.022568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.298 [2024-10-01 01:53:49.022581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.298 [2024-10-01 01:53:49.022612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.298 qpair failed and we were unable to recover it. 
00:36:09.298 [2024-10-01 01:53:49.032468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.032590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.032616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.032630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.032644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.032673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.042481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.042597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.042623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.042637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.042650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.042680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.052553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.052677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.052703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.052724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.052738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.052769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 
00:36:09.299 [2024-10-01 01:53:49.062493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.062621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.062648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.062661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.062675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.062705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.072506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.072619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.072645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.072659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.072672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.072701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.082548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.082661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.082687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.082702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.082715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.082757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 
00:36:09.299 [2024-10-01 01:53:49.092537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.092665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.092693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.092714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.092732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.092765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.102592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.102697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.102723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.102737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.102750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.102780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.112632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.112754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.112780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.112794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.112807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.112836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 
00:36:09.299 [2024-10-01 01:53:49.122643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.122803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.122829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.122843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.122856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.122886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.132672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.132789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.132816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.132829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.132842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.132885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 00:36:09.299 [2024-10-01 01:53:49.142705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.299 [2024-10-01 01:53:49.142814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.299 [2024-10-01 01:53:49.142845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.299 [2024-10-01 01:53:49.142859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.299 [2024-10-01 01:53:49.142872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.299 [2024-10-01 01:53:49.142902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.299 qpair failed and we were unable to recover it. 
00:36:09.559 [2024-10-01 01:53:49.152739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.559 [2024-10-01 01:53:49.152855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.559 [2024-10-01 01:53:49.152882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.559 [2024-10-01 01:53:49.152896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.559 [2024-10-01 01:53:49.152908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.559 [2024-10-01 01:53:49.152938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.559 qpair failed and we were unable to recover it. 00:36:09.559 [2024-10-01 01:53:49.162732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.559 [2024-10-01 01:53:49.162845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.559 [2024-10-01 01:53:49.162871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.559 [2024-10-01 01:53:49.162885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.559 [2024-10-01 01:53:49.162898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.559 [2024-10-01 01:53:49.162929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.559 qpair failed and we were unable to recover it. 00:36:09.559 [2024-10-01 01:53:49.172828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.559 [2024-10-01 01:53:49.172932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.559 [2024-10-01 01:53:49.172959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.559 [2024-10-01 01:53:49.172972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.559 [2024-10-01 01:53:49.172986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.559 [2024-10-01 01:53:49.173023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.559 qpair failed and we were unable to recover it. 
00:36:09.559 [2024-10-01 01:53:49.182807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.559 [2024-10-01 01:53:49.182929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.559 [2024-10-01 01:53:49.182955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.559 [2024-10-01 01:53:49.182969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.559 [2024-10-01 01:53:49.182982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.559 [2024-10-01 01:53:49.183028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.559 qpair failed and we were unable to recover it. 00:36:09.559 [2024-10-01 01:53:49.192875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.559 [2024-10-01 01:53:49.193041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.559 [2024-10-01 01:53:49.193068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.559 [2024-10-01 01:53:49.193082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.559 [2024-10-01 01:53:49.193095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.559 [2024-10-01 01:53:49.193126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.559 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.202866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.202979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.203014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.203030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.203043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.203072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 
00:36:09.560 [2024-10-01 01:53:49.212870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.212985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.213018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.213033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.213046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.213076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.222904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.223018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.223044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.223058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.223070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.223101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.232948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.233074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.233106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.233121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.233134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.233164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 
00:36:09.560 [2024-10-01 01:53:49.242961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.243084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.243110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.243123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.243137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.243166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.252974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.253143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.253169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.253183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.253196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.253227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.263107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.263237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.263263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.263277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.263289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.263321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 
00:36:09.560 [2024-10-01 01:53:49.273060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.273172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.273198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.273213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.273226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.273274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.283068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.283172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.283198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.283212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.283225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.283256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.293082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.293209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.293234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.293248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.293261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.293290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 
00:36:09.560 [2024-10-01 01:53:49.303108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.303212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.303238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.303252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.303265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.303307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.313159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.313273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.313299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.313313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.313326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.313367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 00:36:09.560 [2024-10-01 01:53:49.323195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.560 [2024-10-01 01:53:49.323304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.560 [2024-10-01 01:53:49.323336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.560 [2024-10-01 01:53:49.323351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.560 [2024-10-01 01:53:49.323364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.560 [2024-10-01 01:53:49.323393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.560 qpair failed and we were unable to recover it. 
00:36:09.561 [2024-10-01 01:53:49.333206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.333319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.333345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.333359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.333372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.333403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.561 [2024-10-01 01:53:49.343231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.343374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.343400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.343414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.343428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.343457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.561 [2024-10-01 01:53:49.353280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.353396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.353422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.353436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.353449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.353480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 
00:36:09.561 [2024-10-01 01:53:49.363336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.363448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.363475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.363489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.363508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.363539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.561 [2024-10-01 01:53:49.373328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.373469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.373495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.373510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.373522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.373565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.561 [2024-10-01 01:53:49.383383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.383489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.383522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.383541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.383555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.383586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 
00:36:09.561 [2024-10-01 01:53:49.393367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.393477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.393503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.393517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.393530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.393560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.561 [2024-10-01 01:53:49.403400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.561 [2024-10-01 01:53:49.403558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.561 [2024-10-01 01:53:49.403585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.561 [2024-10-01 01:53:49.403598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.561 [2024-10-01 01:53:49.403611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.561 [2024-10-01 01:53:49.403642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.561 qpair failed and we were unable to recover it. 00:36:09.820 [2024-10-01 01:53:49.413456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.413585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.413612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.413626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.413642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.413672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.820 qpair failed and we were unable to recover it. 
00:36:09.820 [2024-10-01 01:53:49.423474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.423578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.423605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.423619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.423632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.423662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.820 qpair failed and we were unable to recover it. 00:36:09.820 [2024-10-01 01:53:49.433535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.433650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.433677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.433690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.433703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.433732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.820 qpair failed and we were unable to recover it. 00:36:09.820 [2024-10-01 01:53:49.443575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.443697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.443724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.443738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.443754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.443796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.820 qpair failed and we were unable to recover it. 
00:36:09.820 [2024-10-01 01:53:49.453576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.453680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.453707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.453721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.453740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.453773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.820 qpair failed and we were unable to recover it. 00:36:09.820 [2024-10-01 01:53:49.463578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.820 [2024-10-01 01:53:49.463712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.820 [2024-10-01 01:53:49.463738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.820 [2024-10-01 01:53:49.463752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.820 [2024-10-01 01:53:49.463765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.820 [2024-10-01 01:53:49.463795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.473691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.473807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.473835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.473854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.473868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.473911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 
00:36:09.821 [2024-10-01 01:53:49.483620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.483725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.483752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.483766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.483779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.483810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.493711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.493876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.493902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.493916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.493929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.493958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.503760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.503906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.503932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.503946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.503958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.503988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 
00:36:09.821 [2024-10-01 01:53:49.513748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.513877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.513902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.513916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.513929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.513959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.523744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.523849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.523875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.523890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.523903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.523933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.533753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.533882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.533908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.533922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.533935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.533965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 
00:36:09.821 [2024-10-01 01:53:49.543840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.543953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.543978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.544006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.544022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.544053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.553857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.553980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.554015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.554035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.554048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.554079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.563884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.563995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.564030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.564045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.564058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.564087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 
00:36:09.821 [2024-10-01 01:53:49.573864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.573971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.574004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.574021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.574036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.574066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.583924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.584041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.584067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.584081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.584093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.584123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 00:36:09.821 [2024-10-01 01:53:49.593961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.821 [2024-10-01 01:53:49.594078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.821 [2024-10-01 01:53:49.594105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.821 [2024-10-01 01:53:49.594119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.821 [2024-10-01 01:53:49.594132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.821 [2024-10-01 01:53:49.594162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.821 qpair failed and we were unable to recover it. 
00:36:09.822 [2024-10-01 01:53:49.603950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.604066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.604092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.604105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.604119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.604150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 00:36:09.822 [2024-10-01 01:53:49.613986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.614113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.614139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.614154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.614167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.614196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 00:36:09.822 [2024-10-01 01:53:49.624029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.624135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.624161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.624175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.624188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.624217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 
00:36:09.822 [2024-10-01 01:53:49.634060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.634191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.634222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.634237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.634250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.634280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 00:36:09.822 [2024-10-01 01:53:49.644106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.644217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.644243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.644256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.644269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.644300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 00:36:09.822 [2024-10-01 01:53:49.654153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.654263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.654289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.654303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.654316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.654346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 
00:36:09.822 [2024-10-01 01:53:49.664120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.822 [2024-10-01 01:53:49.664227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.822 [2024-10-01 01:53:49.664254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.822 [2024-10-01 01:53:49.664268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.822 [2024-10-01 01:53:49.664280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:09.822 [2024-10-01 01:53:49.664310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.822 qpair failed and we were unable to recover it. 00:36:10.081 [2024-10-01 01:53:49.674175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.081 [2024-10-01 01:53:49.674290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.081 [2024-10-01 01:53:49.674316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.674330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.674344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.674373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.684187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.684296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.684322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.684335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.684348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.684378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-10-01 01:53:49.694219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.694330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.694356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.694371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.694384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.694414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.704265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.704424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.704450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.704465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.704478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.704507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.714298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.714421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.714448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.714462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.714475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.714506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-10-01 01:53:49.724304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.724415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.724450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.724465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.724478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.724508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.734450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.734583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.734609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.734623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.734636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.734666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.744353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.744457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.744483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.744497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.744510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.744541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-10-01 01:53:49.754397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.754509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.754536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.754549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.754561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.754604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.764486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.764633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.764659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.764674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.764686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.764722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.774463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.774576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.774602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.774616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.774629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.774659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-10-01 01:53:49.784454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.784556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.784582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.784597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.784609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.784640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.794519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.794687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.082 [2024-10-01 01:53:49.794714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.082 [2024-10-01 01:53:49.794728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.082 [2024-10-01 01:53:49.794741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.082 [2024-10-01 01:53:49.794772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-10-01 01:53:49.804559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.082 [2024-10-01 01:53:49.804682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.804709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.804723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.804736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.804766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-10-01 01:53:49.814575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.814701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.814732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.814747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.814759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.814790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.824608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.824721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.824747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.824761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.824774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.824803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.834653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.834770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.834795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.834809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.834822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.834852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-10-01 01:53:49.844627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.844733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.844759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.844773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.844786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.844815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.854695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.854813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.854840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.854854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.854872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.854903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.864683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.864783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.864809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.864824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.864837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.864867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-10-01 01:53:49.874776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.874929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.874953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.874967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.874979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.875017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.884783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.884890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.884916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.884930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.884943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.884974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.894788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.894915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.894943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.894958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.894974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.895028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-10-01 01:53:49.904815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.904941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.904968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.904982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.904995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.905038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.914866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.914978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.915013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.915029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.915042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.915071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-10-01 01:53:49.924880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.083 [2024-10-01 01:53:49.925021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.083 [2024-10-01 01:53:49.925048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.083 [2024-10-01 01:53:49.925062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.083 [2024-10-01 01:53:49.925075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.083 [2024-10-01 01:53:49.925106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.343 [2024-10-01 01:53:49.934919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.343 [2024-10-01 01:53:49.935048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.343 [2024-10-01 01:53:49.935074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.343 [2024-10-01 01:53:49.935088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.343 [2024-10-01 01:53:49.935101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.343 [2024-10-01 01:53:49.935132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.343 qpair failed and we were unable to recover it. 00:36:10.343 [2024-10-01 01:53:49.944904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.343 [2024-10-01 01:53:49.945020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.343 [2024-10-01 01:53:49.945046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.343 [2024-10-01 01:53:49.945061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.343 [2024-10-01 01:53:49.945080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.343 [2024-10-01 01:53:49.945110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.343 qpair failed and we were unable to recover it. 00:36:10.343 [2024-10-01 01:53:49.955046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.343 [2024-10-01 01:53:49.955171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.343 [2024-10-01 01:53:49.955197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.343 [2024-10-01 01:53:49.955210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.343 [2024-10-01 01:53:49.955223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.343 [2024-10-01 01:53:49.955254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.343 qpair failed and we were unable to recover it. 
00:36:10.343 [2024-10-01 01:53:49.965003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.343 [2024-10-01 01:53:49.965118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.343 [2024-10-01 01:53:49.965144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.343 [2024-10-01 01:53:49.965158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.343 [2024-10-01 01:53:49.965171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.343 [2024-10-01 01:53:49.965201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.343 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:49.975017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:49.975128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:49.975154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:49.975168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:49.975181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:49.975212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:49.985075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:49.985179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:49.985206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:49.985221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:49.985234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:49.985280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 
00:36:10.344 [2024-10-01 01:53:49.995073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:49.995184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:49.995210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:49.995224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:49.995236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:49.995267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.005142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.005252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.005278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.005292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.005305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.005336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.015256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.015369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.015398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.015413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.015426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.015457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 
00:36:10.344 [2024-10-01 01:53:50.025195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.025306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.025333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.025348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.025361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.025391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.035290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.035434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.035463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.035490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.035507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.035550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.045216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.045329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.045355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.045370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.045383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.045413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 
00:36:10.344 [2024-10-01 01:53:50.055235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.055351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.055378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.055392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.055406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.055436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.065304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.065416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.065443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.065458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.065471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.065501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.075387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.075505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.075531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.075545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.075558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.075588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 
00:36:10.344 [2024-10-01 01:53:50.085345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.085453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.085479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.085493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.085506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.085537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.095368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.344 [2024-10-01 01:53:50.095474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.344 [2024-10-01 01:53:50.095500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.344 [2024-10-01 01:53:50.095514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.344 [2024-10-01 01:53:50.095526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.344 [2024-10-01 01:53:50.095568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.344 qpair failed and we were unable to recover it. 00:36:10.344 [2024-10-01 01:53:50.105362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.105470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.105497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.105511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.105524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.105554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 
00:36:10.345 [2024-10-01 01:53:50.115418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.115548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.115573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.115587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.115600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.115631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.125460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.125585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.125611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.125632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.125646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.125676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.135470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.135575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.135601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.135615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.135628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.135658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 
00:36:10.345 [2024-10-01 01:53:50.145553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.145665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.145691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.145706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.145719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.145749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.155539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.155652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.155678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.155693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.155707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.155736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.165554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.165683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.165709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.165723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.165737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.165766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 
00:36:10.345 [2024-10-01 01:53:50.175664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.175775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.175801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.175815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.175829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.175859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.185593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.185709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.185735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.185749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.345 [2024-10-01 01:53:50.185762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.345 [2024-10-01 01:53:50.185793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.345 qpair failed and we were unable to recover it. 00:36:10.345 [2024-10-01 01:53:50.195685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.345 [2024-10-01 01:53:50.195812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.345 [2024-10-01 01:53:50.195837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.345 [2024-10-01 01:53:50.195851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.605 [2024-10-01 01:53:50.195864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.605 [2024-10-01 01:53:50.195896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.605 qpair failed and we were unable to recover it. 
00:36:10.606 [2024-10-01 01:53:50.205742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.205889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.205916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.205930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.205943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.205973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.215764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.215921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.215956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.215972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.215985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.216024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.225819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.225964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.225991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.226015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.226029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.226059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 
00:36:10.606 [2024-10-01 01:53:50.235810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.235927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.235953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.235967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.235980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.236018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.245830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.245944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.245970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.245984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.246007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.246041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.255806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.255924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.255950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.255964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.255977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.256025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 
00:36:10.606 [2024-10-01 01:53:50.265878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.265987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.266021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.266036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.266049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.266081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.275886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.276014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.276040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.276054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.276067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.276097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.285912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.286066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.286093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.286107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.286120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.286151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 
00:36:10.606 [2024-10-01 01:53:50.295926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.296050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.296076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.296091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.296104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.296134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.305946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.306079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.306121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.306136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.306149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.306180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.316015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.316130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.316156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.316170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.316183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.316213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 
00:36:10.606 [2024-10-01 01:53:50.326033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.606 [2024-10-01 01:53:50.326144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.606 [2024-10-01 01:53:50.326170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.606 [2024-10-01 01:53:50.326184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.606 [2024-10-01 01:53:50.326197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.606 [2024-10-01 01:53:50.326227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.606 qpair failed and we were unable to recover it. 00:36:10.606 [2024-10-01 01:53:50.336083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.336196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.336222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.336235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.336249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.336279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.346120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.346228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.346254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.346268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.346281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.346317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 
00:36:10.607 [2024-10-01 01:53:50.356124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.356246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.356271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.356285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.356298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.356328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.366130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.366244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.366270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.366284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.366297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.366328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.376165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.376268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.376294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.376307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.376320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.376363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 
00:36:10.607 [2024-10-01 01:53:50.386200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.386321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.386355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.386370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.386384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.386414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.396219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.396336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.396363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.396376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.396390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.396420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.406312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.406434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.406464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.406480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.406493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.406524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 
00:36:10.607 [2024-10-01 01:53:50.416280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.416389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.416416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.416430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.416443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.416472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.426288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.426432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.426458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.426472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.426485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.426514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.436365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.436522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.436548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.436562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.436584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.436617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 
00:36:10.607 [2024-10-01 01:53:50.446461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.446587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.446613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.446627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.446640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.446682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.607 [2024-10-01 01:53:50.456435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.607 [2024-10-01 01:53:50.456553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.607 [2024-10-01 01:53:50.456579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.607 [2024-10-01 01:53:50.456593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.607 [2024-10-01 01:53:50.456606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.607 [2024-10-01 01:53:50.456636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.607 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.466438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.466548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.466574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.466596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.466609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.466640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 
00:36:10.869 [2024-10-01 01:53:50.476540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.476698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.476724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.476737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.476750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.476780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.486506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.486616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.486643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.486657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.486670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.486700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.496651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.496768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.496797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.496812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.496825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.496856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 
00:36:10.869 [2024-10-01 01:53:50.506593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.506719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.506745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.506759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.506773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.506803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.516619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.516730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.516756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.516770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.516783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.516825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.526626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.526757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.526783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.526806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.526820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.526850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 
00:36:10.869 [2024-10-01 01:53:50.536606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.536732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.536758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.536771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.536784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.536815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.546725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.546854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.546880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.546895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.546908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.546938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.556683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.556799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.556825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.556839] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.556852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.556883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 
00:36:10.869 [2024-10-01 01:53:50.566719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.566828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.566854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.566868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.566881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.566911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.869 qpair failed and we were unable to recover it. 00:36:10.869 [2024-10-01 01:53:50.576691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.869 [2024-10-01 01:53:50.576803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.869 [2024-10-01 01:53:50.576830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.869 [2024-10-01 01:53:50.576843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.869 [2024-10-01 01:53:50.576856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.869 [2024-10-01 01:53:50.576887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.586737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.586867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.586893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.586908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.586921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.586951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 
00:36:10.870 [2024-10-01 01:53:50.596821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.596966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.597004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.597022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.597036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.597067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.606801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.606915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.606941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.606955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.606968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.607004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.616827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.616953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.616979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.617009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.617025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.617055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 
00:36:10.870 [2024-10-01 01:53:50.626857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.626962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.626988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.627010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.627025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.627055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.636889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.637032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.637057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.637071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.637084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.637115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.646932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.647043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.647070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.647084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.647096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.647125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 
00:36:10.870 [2024-10-01 01:53:50.656943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.657082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.657108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.657122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.657135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.657166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.666956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.667069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.667096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.667110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.667123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.667153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.677049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.677188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.677213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.677227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.677241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.677271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 
00:36:10.870 [2024-10-01 01:53:50.687052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.687168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.687194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.687209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.687225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.687255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.697093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.697201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.697227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.697241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.697254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.697285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 00:36:10.870 [2024-10-01 01:53:50.707062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.870 [2024-10-01 01:53:50.707175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.870 [2024-10-01 01:53:50.707206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.870 [2024-10-01 01:53:50.707222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.870 [2024-10-01 01:53:50.707235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.870 [2024-10-01 01:53:50.707266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.870 qpair failed and we were unable to recover it. 
00:36:10.871 [2024-10-01 01:53:50.717149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.871 [2024-10-01 01:53:50.717259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.871 [2024-10-01 01:53:50.717285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.871 [2024-10-01 01:53:50.717300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.871 [2024-10-01 01:53:50.717313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:10.871 [2024-10-01 01:53:50.717343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.871 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.727165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.727317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.727344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.727358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.727371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.727413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.737209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.737357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.737383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.737397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.737410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.737441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 
00:36:11.131 [2024-10-01 01:53:50.747184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.747296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.747324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.747339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.747356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.747394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.757257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.757367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.757394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.757408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.757421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.757451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.767231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.767342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.767369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.767383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.767396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.767426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 
00:36:11.131 [2024-10-01 01:53:50.777313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.777427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.777452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.777466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.777480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.777510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.787324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.787451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.787477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.787490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.787503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.787534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.797393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.797506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.797538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.797552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.797565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.797596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 
00:36:11.131 [2024-10-01 01:53:50.807379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.131 [2024-10-01 01:53:50.807499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.131 [2024-10-01 01:53:50.807525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.131 [2024-10-01 01:53:50.807540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.131 [2024-10-01 01:53:50.807552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.131 [2024-10-01 01:53:50.807581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.131 qpair failed and we were unable to recover it. 00:36:11.131 [2024-10-01 01:53:50.817465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.817584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.817610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.817624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.817637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.817668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.827455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.827613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.827639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.827652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.827664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.827693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 
00:36:11.132 [2024-10-01 01:53:50.837431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.837540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.837566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.837580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.837593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.837629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.847508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.847628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.847656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.847670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.847688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.847719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.857508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.857618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.857644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.857658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.857669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.857699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 
00:36:11.132 [2024-10-01 01:53:50.867511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.867647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.867674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.867688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.867701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.867731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.877556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.877688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.877713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.877726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.877738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.877767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.887613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.887724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.887756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.887773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.887786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.887817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 
00:36:11.132 [2024-10-01 01:53:50.897608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.897716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.897743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.897760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.897773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.897815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.907718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.907822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.907848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.907862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.907875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.907905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.917701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.917858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.917884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.917898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.917911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.917942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 
00:36:11.132 [2024-10-01 01:53:50.927688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.927800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.927826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.927840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.927859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.927891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.937747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.937868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.132 [2024-10-01 01:53:50.937895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.132 [2024-10-01 01:53:50.937912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.132 [2024-10-01 01:53:50.937927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.132 [2024-10-01 01:53:50.937958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.132 qpair failed and we were unable to recover it. 00:36:11.132 [2024-10-01 01:53:50.947740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.132 [2024-10-01 01:53:50.947847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.133 [2024-10-01 01:53:50.947874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.133 [2024-10-01 01:53:50.947888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.133 [2024-10-01 01:53:50.947901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.133 [2024-10-01 01:53:50.947930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.133 qpair failed and we were unable to recover it. 
00:36:11.133 [2024-10-01 01:53:50.957804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.133 [2024-10-01 01:53:50.957934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.133 [2024-10-01 01:53:50.957960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.133 [2024-10-01 01:53:50.957974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.133 [2024-10-01 01:53:50.957987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.133 [2024-10-01 01:53:50.958033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.133 qpair failed and we were unable to recover it. 00:36:11.133 [2024-10-01 01:53:50.967797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.133 [2024-10-01 01:53:50.967917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.133 [2024-10-01 01:53:50.967943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.133 [2024-10-01 01:53:50.967957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.133 [2024-10-01 01:53:50.967970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.133 [2024-10-01 01:53:50.968008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.133 qpair failed and we were unable to recover it. 00:36:11.133 [2024-10-01 01:53:50.977825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.133 [2024-10-01 01:53:50.977944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.133 [2024-10-01 01:53:50.977970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.133 [2024-10-01 01:53:50.977984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.133 [2024-10-01 01:53:50.978004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.133 [2024-10-01 01:53:50.978051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.133 qpair failed and we were unable to recover it. 
00:36:11.393 [2024-10-01 01:53:50.987865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:50.987984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:50.988018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:50.988035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:50.988048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:50.988083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:50.997908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:50.998069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:50.998096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:50.998110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:50.998123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:50.998153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:51.007927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.008058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.008085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.008099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.008112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.008142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 
00:36:11.393 [2024-10-01 01:53:51.017953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.018079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.018105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.018120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.018148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.018180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:51.027952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.028109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.028136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.028150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.028163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.028193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:51.038028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.038142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.038169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.038183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.038196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.038226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 
00:36:11.393 [2024-10-01 01:53:51.048056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.048166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.048191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.048205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.048218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.048248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:51.058043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.058155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.058181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.058196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.058209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.058238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 00:36:11.393 [2024-10-01 01:53:51.068086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.393 [2024-10-01 01:53:51.068236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.393 [2024-10-01 01:53:51.068262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.393 [2024-10-01 01:53:51.068276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.393 [2024-10-01 01:53:51.068289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.393 [2024-10-01 01:53:51.068319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.393 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-10-01 01:53:51.078111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.078264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.078290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.078305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.078318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.078348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.088141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.088253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.088279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.088293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.088306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.088337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.098168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.098279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.098305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.098319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.098332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.098362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-10-01 01:53:51.108186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.108289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.108315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.108335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.108349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.108380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.118257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.118379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.118404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.118419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.118431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.118461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.128334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.128495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.128521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.128534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.128547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.128577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-10-01 01:53:51.138286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.138396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.138422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.138436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.138448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.138489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.148326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.148430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.148456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.148470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.148483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.148512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.158379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.158487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.158513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.158527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.158539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.158569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-10-01 01:53:51.168408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.168516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.168543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.168557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.168569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.168601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.178405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.178517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.178543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.178558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.178570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.178599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.394 [2024-10-01 01:53:51.188428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.188534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.188560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.188575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.188588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.188617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 
00:36:11.394 [2024-10-01 01:53:51.198484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.394 [2024-10-01 01:53:51.198598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.394 [2024-10-01 01:53:51.198629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.394 [2024-10-01 01:53:51.198644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.394 [2024-10-01 01:53:51.198658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.394 [2024-10-01 01:53:51.198687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.394 qpair failed and we were unable to recover it. 00:36:11.395 [2024-10-01 01:53:51.208557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.395 [2024-10-01 01:53:51.208666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.395 [2024-10-01 01:53:51.208692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.395 [2024-10-01 01:53:51.208706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.395 [2024-10-01 01:53:51.208719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.395 [2024-10-01 01:53:51.208748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-10-01 01:53:51.218567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.395 [2024-10-01 01:53:51.218684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.395 [2024-10-01 01:53:51.218710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.395 [2024-10-01 01:53:51.218724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.395 [2024-10-01 01:53:51.218737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.395 [2024-10-01 01:53:51.218766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.395 qpair failed and we were unable to recover it. 
00:36:11.395 [2024-10-01 01:53:51.228570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.395 [2024-10-01 01:53:51.228678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.395 [2024-10-01 01:53:51.228703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.395 [2024-10-01 01:53:51.228717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.395 [2024-10-01 01:53:51.228731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.395 [2024-10-01 01:53:51.228761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.395 [2024-10-01 01:53:51.238630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.395 [2024-10-01 01:53:51.238760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.395 [2024-10-01 01:53:51.238788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.395 [2024-10-01 01:53:51.238803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.395 [2024-10-01 01:53:51.238820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.395 [2024-10-01 01:53:51.238853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.395 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.248661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.248775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.248801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.248815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.248828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.248858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 
00:36:11.656 [2024-10-01 01:53:51.258665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.258767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.258794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.258808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.258820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.258851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.268797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.268932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.268958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.268972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.268985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.269022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.278714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.278830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.278856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.278870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.278883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.278913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 
00:36:11.656 [2024-10-01 01:53:51.288723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.288825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.288857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.288872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.288885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.288915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.298770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.298894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.298921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.298935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.298948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.298978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.308839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.308942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.308968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.308982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.308995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.309033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 
00:36:11.656 [2024-10-01 01:53:51.318834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.318948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.318974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.318989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.319010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.319042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.328845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.328955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.328981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.328995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.329019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.329064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.338878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.339008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.339035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.339052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.339065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.339094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 
00:36:11.656 [2024-10-01 01:53:51.348895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.349010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.349037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.349051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.349064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.349096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.358976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.656 [2024-10-01 01:53:51.359144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.656 [2024-10-01 01:53:51.359171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.656 [2024-10-01 01:53:51.359185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.656 [2024-10-01 01:53:51.359199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.656 [2024-10-01 01:53:51.359228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.656 qpair failed and we were unable to recover it. 00:36:11.656 [2024-10-01 01:53:51.368981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.369101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.369128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.369142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.369154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.369185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 
00:36:11.657 [2024-10-01 01:53:51.378987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.379131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.379162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.379177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.379190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.379220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.389105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.389247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.389282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.389299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.389313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.389350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.399049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.399176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.399203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.399217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.399229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.399259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 
00:36:11.657 [2024-10-01 01:53:51.409062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.409172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.409198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.409212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.409224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.409254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.419103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.419214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.419240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.419254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.419273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.419303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.429133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.429244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.429270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.429284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.429298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.429340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 
00:36:11.657 [2024-10-01 01:53:51.439180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.439292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.439318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.439332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.439345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.439375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.449200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.449314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.449342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.449357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.449370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.449412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.459239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.459349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.459376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.459390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.459403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.459434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 
00:36:11.657 [2024-10-01 01:53:51.469246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.469366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.469393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.469407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.469420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.469450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.479305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.479430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.479457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.479471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.479484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.479514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 00:36:11.657 [2024-10-01 01:53:51.489404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.657 [2024-10-01 01:53:51.489538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.657 [2024-10-01 01:53:51.489565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.657 [2024-10-01 01:53:51.489578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.657 [2024-10-01 01:53:51.489591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.657 [2024-10-01 01:53:51.489620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.657 qpair failed and we were unable to recover it. 
00:36:11.657 [2024-10-01 01:53:51.499319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.658 [2024-10-01 01:53:51.499444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.658 [2024-10-01 01:53:51.499470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.658 [2024-10-01 01:53:51.499484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.658 [2024-10-01 01:53:51.499497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.658 [2024-10-01 01:53:51.499527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.658 qpair failed and we were unable to recover it. 00:36:11.917 [2024-10-01 01:53:51.509358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.509463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.917 [2024-10-01 01:53:51.509489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.917 [2024-10-01 01:53:51.509503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.917 [2024-10-01 01:53:51.509522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.917 [2024-10-01 01:53:51.509553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-10-01 01:53:51.519406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.519526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.917 [2024-10-01 01:53:51.519552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.917 [2024-10-01 01:53:51.519566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.917 [2024-10-01 01:53:51.519579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.917 [2024-10-01 01:53:51.519608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-10-01 01:53:51.529421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.529552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.917 [2024-10-01 01:53:51.529579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.917 [2024-10-01 01:53:51.529593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.917 [2024-10-01 01:53:51.529606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.917 [2024-10-01 01:53:51.529648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-10-01 01:53:51.539472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.539599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.917 [2024-10-01 01:53:51.539625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.917 [2024-10-01 01:53:51.539638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.917 [2024-10-01 01:53:51.539651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.917 [2024-10-01 01:53:51.539682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.917 qpair failed and we were unable to recover it. 00:36:11.917 [2024-10-01 01:53:51.549538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.549645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.917 [2024-10-01 01:53:51.549671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.917 [2024-10-01 01:53:51.549685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.917 [2024-10-01 01:53:51.549697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.917 [2024-10-01 01:53:51.549727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.917 qpair failed and we were unable to recover it. 
00:36:11.917 [2024-10-01 01:53:51.559556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.917 [2024-10-01 01:53:51.559665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.559690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.559705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.559718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.559746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.569601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.569767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.569793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.569807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.569819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.569849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.579592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.579714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.579739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.579753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.579766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.579796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-10-01 01:53:51.589600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.589706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.589732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.589746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.589758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.589787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.599662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.599782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.599808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.599829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.599842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.599872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.609689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.609810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.609836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.609850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.609863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.609893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-10-01 01:53:51.619701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.619816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.619843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.619857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.619870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.619901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.629719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.629872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.629898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.629912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.629925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.629956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.639747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.639867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.639893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.639906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.639920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.639950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-10-01 01:53:51.649793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.649914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.649940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.649955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.649968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.650005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.659785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.659914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.659940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.659954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.659967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.660005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.669912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.670061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.670088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.670102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.670115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.670158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 
00:36:11.918 [2024-10-01 01:53:51.679894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.680015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.680042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.918 [2024-10-01 01:53:51.680056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.918 [2024-10-01 01:53:51.680069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.918 [2024-10-01 01:53:51.680100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.918 qpair failed and we were unable to recover it. 00:36:11.918 [2024-10-01 01:53:51.689891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.918 [2024-10-01 01:53:51.690028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.918 [2024-10-01 01:53:51.690055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.690080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.690095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.690126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-10-01 01:53:51.699968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.700087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.700113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.700127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.700140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.700171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-10-01 01:53:51.709936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.710061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.710087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.710102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.710117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.710146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-10-01 01:53:51.719978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.720153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.720180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.720194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.720207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.720250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-10-01 01:53:51.730018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.730130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.730156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.730170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.730184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.730214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-10-01 01:53:51.740022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.740176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.740202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.740215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.740228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.740258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-10-01 01:53:51.750030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.750139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.750165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.750179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.750192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.750222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 00:36:11.919 [2024-10-01 01:53:51.760096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.919 [2024-10-01 01:53:51.760209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.919 [2024-10-01 01:53:51.760235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.919 [2024-10-01 01:53:51.760249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.919 [2024-10-01 01:53:51.760263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:11.919 [2024-10-01 01:53:51.760294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.919 qpair failed and we were unable to recover it. 
00:36:11.919 [2024-10-01 01:53:51.770147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.178 [2024-10-01 01:53:51.770257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.178 [2024-10-01 01:53:51.770285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.178 [2024-10-01 01:53:51.770298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.178 [2024-10-01 01:53:51.770312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.178 [2024-10-01 01:53:51.770342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.178 qpair failed and we were unable to recover it. 00:36:12.178 [2024-10-01 01:53:51.780133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.178 [2024-10-01 01:53:51.780247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.178 [2024-10-01 01:53:51.780279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.178 [2024-10-01 01:53:51.780294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.178 [2024-10-01 01:53:51.780307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.178 [2024-10-01 01:53:51.780336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.178 qpair failed and we were unable to recover it. 00:36:12.178 [2024-10-01 01:53:51.790242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.178 [2024-10-01 01:53:51.790387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.178 [2024-10-01 01:53:51.790414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.178 [2024-10-01 01:53:51.790428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.178 [2024-10-01 01:53:51.790441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.178 [2024-10-01 01:53:51.790471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.178 qpair failed and we were unable to recover it. 
00:36:12.178 [2024-10-01 01:53:51.800235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.178 [2024-10-01 01:53:51.800400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.178 [2024-10-01 01:53:51.800426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.178 [2024-10-01 01:53:51.800440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.178 [2024-10-01 01:53:51.800453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.178 [2024-10-01 01:53:51.800482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.178 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.810279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.810394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.810420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.810434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.810448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.810478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.820252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.820368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.820395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.820409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.820422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.820457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 
00:36:12.179 [2024-10-01 01:53:51.830328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.830451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.830478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.830493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.830505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.830533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.840324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.840451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.840477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.840490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.840504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.840534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.850407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.850521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.850547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.850561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.850574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.850604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 
00:36:12.179 [2024-10-01 01:53:51.860441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.860551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.860577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.860591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.860604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.860646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.870399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.870511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.870541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.870556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.870569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.870599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.880434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.880543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.880567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.880580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.880593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.880637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 
00:36:12.179 [2024-10-01 01:53:51.890451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.890562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.890588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.890602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.890615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.890646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.900549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.900687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.900713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.900727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.900739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.900769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.910543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.910695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.910721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.910735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.910748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.910783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 
00:36:12.179 [2024-10-01 01:53:51.920582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.920698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.920723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.920737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.920750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.179 [2024-10-01 01:53:51.920780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.179 qpair failed and we were unable to recover it. 00:36:12.179 [2024-10-01 01:53:51.930572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.179 [2024-10-01 01:53:51.930682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.179 [2024-10-01 01:53:51.930708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.179 [2024-10-01 01:53:51.930722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.179 [2024-10-01 01:53:51.930736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.930765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:51.940568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.940687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.940713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.940727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.940739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.940769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 
00:36:12.180 [2024-10-01 01:53:51.950626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.950771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.950797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.950811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.950824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.950854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:51.960719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.960847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.960874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.960888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.960901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.960932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:51.970687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.970801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.970827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.970841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.970855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.970885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 
00:36:12.180 [2024-10-01 01:53:51.980748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.980856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.980882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.980896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.980909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.980939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:51.990718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:51.990841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:51.990869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:51.990884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:51.990901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:51.990933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:52.000804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:52.000913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:52.000939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:52.000953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:52.000976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff504000b90 00:36:12.180 [2024-10-01 01:53:52.001015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:12.180 qpair failed and we were unable to recover it. 
00:36:12.180 [2024-10-01 01:53:52.010804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:52.010923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:52.010956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:52.010972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:52.010985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x196b340 00:36:12.180 [2024-10-01 01:53:52.011027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:52.020807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:52.020911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:52.020938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:52.020952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:52.020965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x196b340 00:36:12.180 [2024-10-01 01:53:52.020994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:12.180 qpair failed and we were unable to recover it. 00:36:12.180 [2024-10-01 01:53:52.030855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.180 [2024-10-01 01:53:52.030968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.180 [2024-10-01 01:53:52.031012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.180 [2024-10-01 01:53:52.031032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.180 [2024-10-01 01:53:52.031046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff500000b90 00:36:12.180 [2024-10-01 01:53:52.031079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.180 qpair failed and we were unable to recover it. 
00:36:12.438 [2024-10-01 01:53:52.040943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.438 [2024-10-01 01:53:52.041066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.438 [2024-10-01 01:53:52.041095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.439 [2024-10-01 01:53:52.041110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.439 [2024-10-01 01:53:52.041123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff500000b90 00:36:12.439 [2024-10-01 01:53:52.041155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.439 qpair failed and we were unable to recover it. 00:36:12.439 [2024-10-01 01:53:52.050933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.439 [2024-10-01 01:53:52.051064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.439 [2024-10-01 01:53:52.051097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.439 [2024-10-01 01:53:52.051113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.439 [2024-10-01 01:53:52.051127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff50c000b90 00:36:12.439 [2024-10-01 01:53:52.051172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.439 qpair failed and we were unable to recover it. 00:36:12.439 [2024-10-01 01:53:52.060961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.439 [2024-10-01 01:53:52.061091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.439 [2024-10-01 01:53:52.061120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.439 [2024-10-01 01:53:52.061137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.439 [2024-10-01 01:53:52.061150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff50c000b90 00:36:12.439 [2024-10-01 01:53:52.061181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:12.439 qpair failed and we were unable to recover it. 00:36:12.439 [2024-10-01 01:53:52.061281] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:12.439 A controller has encountered a failure and is being reset. 00:36:12.439 Controller properly reset. 
00:36:12.439 Initializing NVMe Controllers 00:36:12.439 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:12.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:12.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:12.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:12.439 Initialization complete. Launching workers. 00:36:12.439 Starting thread on core 1 00:36:12.439 Starting thread on core 2 00:36:12.439 Starting thread on core 3 00:36:12.439 Starting thread on core 0 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:12.439 00:36:12.439 real 0m10.756s 00:36:12.439 user 0m18.565s 00:36:12.439 sys 0m5.375s 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.439 ************************************ 00:36:12.439 END TEST nvmf_target_disconnect_tc2 00:36:12.439 ************************************ 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.439 rmmod nvme_tcp 00:36:12.439 rmmod nvme_fabrics 00:36:12.439 rmmod nvme_keyring 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 1064550 ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 1064550 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1064550 ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1064550 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1064550 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1064550' 00:36:12.439 killing process with pid 1064550 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1064550 00:36:12.439 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1064550 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.697 01:53:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.235 00:36:15.235 real 0m15.593s 00:36:15.235 user 0m44.629s 00:36:15.235 sys 0m7.404s 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:15.235 ************************************ 00:36:15.235 END TEST nvmf_target_disconnect 00:36:15.235 ************************************ 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:15.235 00:36:15.235 real 6m44.802s 00:36:15.235 user 17m14.471s 00:36:15.235 sys 1m27.996s 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:15.235 01:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.235 ************************************ 00:36:15.235 END TEST nvmf_host 00:36:15.235 ************************************ 00:36:15.235 01:53:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:15.235 01:53:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:15.235 01:53:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:15.235 01:53:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:15.235 01:53:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:15.235 01:53:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.235 ************************************ 00:36:15.235 START TEST nvmf_target_core_interrupt_mode 00:36:15.235 ************************************ 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:15.235 * Looking for test storage... 00:36:15.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.235 --rc genhtml_branch_coverage=1 00:36:15.235 --rc genhtml_function_coverage=1 00:36:15.235 --rc genhtml_legend=1 00:36:15.235 --rc geninfo_all_blocks=1 00:36:15.235 --rc geninfo_unexecuted_blocks=1 00:36:15.235 00:36:15.235 ' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.235 --rc genhtml_branch_coverage=1 00:36:15.235 --rc genhtml_function_coverage=1 00:36:15.235 --rc genhtml_legend=1 00:36:15.235 --rc geninfo_all_blocks=1 00:36:15.235 --rc geninfo_unexecuted_blocks=1 00:36:15.235 00:36:15.235 ' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.235 --rc genhtml_branch_coverage=1 00:36:15.235 --rc genhtml_function_coverage=1 00:36:15.235 --rc genhtml_legend=1 00:36:15.235 --rc geninfo_all_blocks=1 00:36:15.235 --rc geninfo_unexecuted_blocks=1 00:36:15.235 00:36:15.235 ' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.235 --rc genhtml_branch_coverage=1 00:36:15.235 --rc genhtml_function_coverage=1 00:36:15.235 --rc genhtml_legend=1 00:36:15.235 --rc geninfo_all_blocks=1 00:36:15.235 --rc geninfo_unexecuted_blocks=1 00:36:15.235 00:36:15.235 ' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.235 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:15.236 ************************************ 00:36:15.236 START TEST nvmf_abort 00:36:15.236 ************************************ 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:15.236 * Looking for test storage... 00:36:15.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.236 --rc genhtml_branch_coverage=1 00:36:15.236 --rc genhtml_function_coverage=1 00:36:15.236 --rc genhtml_legend=1 00:36:15.236 --rc geninfo_all_blocks=1 00:36:15.236 --rc geninfo_unexecuted_blocks=1 00:36:15.236 00:36:15.236 ' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.236 --rc genhtml_branch_coverage=1 00:36:15.236 --rc genhtml_function_coverage=1 00:36:15.236 --rc genhtml_legend=1 00:36:15.236 --rc geninfo_all_blocks=1 00:36:15.236 --rc geninfo_unexecuted_blocks=1 00:36:15.236 00:36:15.236 ' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.236 --rc genhtml_branch_coverage=1 00:36:15.236 --rc genhtml_function_coverage=1 00:36:15.236 --rc genhtml_legend=1 00:36:15.236 --rc geninfo_all_blocks=1 00:36:15.236 --rc geninfo_unexecuted_blocks=1 00:36:15.236 00:36:15.236 ' 00:36:15.236 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:15.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.236 --rc genhtml_branch_coverage=1 00:36:15.237 --rc genhtml_function_coverage=1 00:36:15.237 --rc genhtml_legend=1 00:36:15.237 --rc geninfo_all_blocks=1 00:36:15.237 --rc geninfo_unexecuted_blocks=1 00:36:15.237 00:36:15.237 ' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.237 01:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.237 01:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.142 01:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:17.142 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:17.142 01:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:17.142 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:17.142 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:17.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:17.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:36:17.143 00:36:17.143 --- 10.0.0.2 ping statistics --- 00:36:17.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.143 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:17.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:36:17.143 00:36:17.143 --- 10.0.0.1 ping statistics --- 00:36:17.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.143 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.143 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=1067352 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 1067352 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1067352 ']' 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:17.144 01:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.404 [2024-10-01 01:53:57.028147] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:17.404 [2024-10-01 01:53:57.029204] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:36:17.404 [2024-10-01 01:53:57.029258] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.404 [2024-10-01 01:53:57.097691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:17.404 [2024-10-01 01:53:57.188907] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.404 [2024-10-01 01:53:57.188976] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.404 [2024-10-01 01:53:57.189012] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.404 [2024-10-01 01:53:57.189027] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.404 [2024-10-01 01:53:57.189038] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.404 [2024-10-01 01:53:57.189144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.404 [2024-10-01 01:53:57.189184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:17.404 [2024-10-01 01:53:57.189187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.664 [2024-10-01 01:53:57.289461] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:17.664 [2024-10-01 01:53:57.289668] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:17.664 [2024-10-01 01:53:57.289681] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:17.664 [2024-10-01 01:53:57.289944] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
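[Editor's note] The nvmftestinit/nvmfappstart sequence traced above builds the whole TCP test bed: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace, the two ports get 10.0.0.2 and 10.0.0.1 on a /24, TCP port 4420 is opened with an SPDK-tagged iptables rule, connectivity is verified with two pings, and nvmf_tgt is then started inside that namespace with --interrupt-mode. A condensed sketch of the same steps, using the device and path names from this run (commands assume root, as the autotest runs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                          # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator

    # Start the target inside the namespace, as nvmfappstart does above:
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    tgt_pid=$!

The --interrupt-mode flag is what produces the "Set spdk_thread (...) to intr mode" notices above: the reactors wait on events instead of busy-polling, which is the behaviour this nvmf_target_core_interrupt_mode group is exercising.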
00:36:17.664 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:17.664 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:17.664 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 [2024-10-01 01:53:57.345951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 Malloc0 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 Delay0 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 01:53:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 [2024-10-01 01:53:57.410130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.665 01:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:17.924 [2024-10-01 01:53:57.552211] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:19.827 Initializing NVMe Controllers 00:36:19.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:19.827 controller IO queue size 128 less than required 00:36:19.827 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:19.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:19.827 Initialization complete. Launching workers. 
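[Editor's note] The rpc_cmd calls traced above configure the target in a few steps: create the TCP transport, create a 64 MiB malloc bdev with 4096-byte blocks, wrap it in a delay bdev so I/O stays outstanding long enough to be aborted, expose it through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and add a discovery listener. Issued directly through scripts/rpc.py the same sequence looks roughly like this (a sketch; the rpc() helper is hypothetical, and /var/tmp/spdk.sock is the RPC socket waitforlisten polled above):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }    # hypothetical wrapper around rpc.py

    rpc rpc_get_methods > /dev/null                              # cheap check that the target answers
    rpc nvmf_create_transport -t tcp -o -u 8192 -a 256           # abort.sh@17
    rpc bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB bdev, 4096-byte blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Drive it from the initiator side as abort.sh@30 does: one core, a 1-second run, queue depth 128.
    "$SPDK_DIR/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128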
00:36:19.827 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29606 00:36:19.827 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29663, failed to submit 66 00:36:19.827 success 29606, unsuccessful 57, failed 0 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.827 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.827 rmmod nvme_tcp 00:36:19.827 rmmod nvme_fabrics 00:36:19.827 rmmod nvme_keyring 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 1067352 ']' 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 1067352 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1067352 ']' 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1067352 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067352 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067352' 00:36:20.086 killing process with pid 1067352 
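[Editor's note] The abort counters reported above are worth a quick arithmetic sanity check, because they close exactly:

    123 completed + 29606 failed (aborted)      = 29729 I/Os issued
    29663 aborts submitted + 66 not submitted   = 29729 abort attempts, i.e. one per I/O
    29606 successful aborts                     = 29606 aborted I/Os
    57 unsuccessful + 66 not submitted          = 123 I/Os that completed before their abort took effect

Every I/O is accounted for once: it was either successfully aborted or it completed normally, in which case its abort either came back unsuccessful or was never submitted. The trailing "failed 0" presumably means none of the abort commands themselves completed with an error.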
00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1067352 00:36:20.086 01:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1067352 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.346 01:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:22.251 00:36:22.251 real 0m7.285s 00:36:22.251 user 0m9.454s 00:36:22.251 sys 0m2.826s 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:22.251 ************************************ 00:36:22.251 END TEST nvmf_abort 00:36:22.251 ************************************ 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:22.251 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.510 ************************************ 00:36:22.510 START TEST nvmf_ns_hotplug_stress 00:36:22.510 ************************************ 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:22.510 * Looking for test storage... 
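[Editor's note] Before the next test starts, the teardown traced just above (nvmftestfini) is the mirror image of the setup: unload the kernel initiator modules, kill the target, strip only the SPDK-tagged firewall rules, and remove the namespace. Roughly, and assuming remove_spdk_ns simply deletes the *_ns_spdk namespace (its output is redirected away in the log, so that step is an assumption):

    modprobe -r nvme-tcp                                        # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
    modprobe -r nvme-fabrics
    kill "$tgt_pid" && wait "$tgt_pid"                          # killprocess 1067352 in the log
    iptables-save | grep -v SPDK_NVMF | iptables-restore        # iptr: remove only SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                             # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1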
00:36:22.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:22.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.510 --rc genhtml_branch_coverage=1 00:36:22.510 --rc genhtml_function_coverage=1 00:36:22.510 --rc genhtml_legend=1 00:36:22.510 --rc geninfo_all_blocks=1 00:36:22.510 --rc geninfo_unexecuted_blocks=1 00:36:22.510 00:36:22.510 ' 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:22.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.510 --rc genhtml_branch_coverage=1 00:36:22.510 --rc genhtml_function_coverage=1 00:36:22.510 --rc genhtml_legend=1 00:36:22.510 --rc geninfo_all_blocks=1 00:36:22.510 --rc geninfo_unexecuted_blocks=1 00:36:22.510 00:36:22.510 ' 00:36:22.510 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:22.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.510 --rc genhtml_branch_coverage=1 00:36:22.510 --rc genhtml_function_coverage=1 00:36:22.511 --rc genhtml_legend=1 00:36:22.511 --rc geninfo_all_blocks=1 00:36:22.511 --rc geninfo_unexecuted_blocks=1 00:36:22.511 00:36:22.511 ' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:22.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.511 --rc genhtml_branch_coverage=1 00:36:22.511 --rc genhtml_function_coverage=1 
00:36:22.511 --rc genhtml_legend=1 00:36:22.511 --rc geninfo_all_blocks=1 00:36:22.511 --rc geninfo_unexecuted_blocks=1 00:36:22.511 00:36:22.511 ' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
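[Editor's note] The block above re-sources nvmf/common.sh for the hotplug-stress test and shows how the host identity used by later nvme connect calls is derived: nvme gen-hostnqn (a stock nvme-cli command) produces a UUID-based NQN, and the host ID is that UUID with the NQN prefix stripped. A small sketch of the same derivation; the exact parameter expansion is an assumption, only the resulting values are visible in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}              # keep only the UUID part (assumed; matches the traced value)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Illustrative use, not part of this trace:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"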
00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.511 01:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:25.050 01:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:25.050 01:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:25.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:25.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:25.050 01:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:25.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:25.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.050 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:25.051 01:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:25.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:25.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:36:25.051 00:36:25.051 --- 10.0.0.2 ping statistics --- 00:36:25.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.051 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:25.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:25.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:36:25.051 00:36:25.051 --- 10.0.0.1 ping statistics --- 00:36:25.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.051 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=1069686 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 1069686 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1069686 ']' 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
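Reconstructed from the nvmf_tcp_init trace above: the test places one port of the ice-bound NIC pair (cvl_0_0) in a private network namespace for the target, keeps its sibling (cvl_0_1) in the default namespace as the initiator side, assigns 10.0.0.2 and 10.0.0.1, opens TCP port 4420, confirms reachability in both directions with ping, and loads nvme-tcp. Condensed to the bare commands as they appear in the log (the harness additionally tags the iptables rule with a comment and flushes old addresses first):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp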
00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:25.051 [2024-10-01 01:54:04.570402] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:25.051 [2024-10-01 01:54:04.571555] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:36:25.051 [2024-10-01 01:54:04.571610] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:25.051 [2024-10-01 01:54:04.644078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:25.051 [2024-10-01 01:54:04.735638] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:25.051 [2024-10-01 01:54:04.735692] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:25.051 [2024-10-01 01:54:04.735722] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:25.051 [2024-10-01 01:54:04.735734] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:25.051 [2024-10-01 01:54:04.735745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:25.051 [2024-10-01 01:54:04.735832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:25.051 [2024-10-01 01:54:04.735896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:25.051 [2024-10-01 01:54:04.735898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.051 [2024-10-01 01:54:04.837425] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:25.051 [2024-10-01 01:54:04.837626] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:25.051 [2024-10-01 01:54:04.837636] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:25.051 [2024-10-01 01:54:04.837912] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
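nvmfappstart then launches the target inside that namespace with core mask 0xE and --interrupt-mode; the notices above show DPDK/EAL coming up, three reactors starting on cores 1-3, and the app and poll-group threads being set to interrupt mode. A rough equivalent of the launch plus a follow-up check is sketched below, assuming the default /var/tmp/spdk.sock RPC socket; the polling loop is a crude stand-in for the harness's waitforlisten, and framework_get_reactors is a standard SPDK RPC whose exact output fields vary by release.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!                                           # 1069686 in this run
# Wait until the RPC socket answers, then inspect reactor state.
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
"$SPDK/scripts/rpc.py" framework_get_reactors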
00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:25.051 01:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:25.312 [2024-10-01 01:54:05.156627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.572 01:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:25.832 01:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.091 [2024-10-01 01:54:05.741063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.091 01:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:26.350 01:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:26.610 Malloc0 00:36:26.610 01:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:26.871 Delay0 00:36:26.871 01:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.161 01:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:27.419 NULL1 00:36:27.420 01:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
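The @27-@36 steps traced above build the entire target configuration over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 allowing up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a delay bdev (Delay0) layered on a 32 MB malloc bdev, and a 1000 MB null bdev (NULL1), with Delay0 and NULL1 attached as namespaces 1 and 2. Gathered in one place below; $RPC stands for the full scripts/rpc.py path logged above.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0            # 32 MB backing bdev, 512-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
$RPC bdev_null_create NULL1 1000 512                 # 1000 MB null bdev
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2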
00:36:27.988 01:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1070257 00:36:27.988 01:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:27.988 01:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.988 01:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:28.925 Read completed with error (sct=0, sc=11) 00:36:28.926 01:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:28.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.184 01:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:29.184 01:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:29.441 true 00:36:29.698 01:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:29.698 01:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.262 01:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.521 01:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:30.521 01:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:30.779 true 00:36:30.779 01:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:30.779 01:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.037 01:54:10 
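From @40 onward the test starts spdk_nvme_perf as a 30-second 512-byte random-read workload against 10.0.0.2:4420 at queue depth 128, records its PID, and then loops for as long as that process is alive: detach namespace 1, re-attach Delay0, and resize NULL1 to a steadily growing null_size (1001, 1002, ...). The interleaved 'Read completed with error (sct=0, sc=11)' lines are the point of the exercise: reads that land while a namespace is detached complete with an error, and the -Q 1000 option appears to let perf keep running and log only every thousandth occurrence. A condensed sketch of the loop as it shows up in the trace (the real script carries extra error handling):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!                                          # 1070257 in this run

null_size=1000
while kill -0 "$PERF_PID"; do                        # keep hotplugging while perf runs
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$SPDK/scripts/rpc.py" bdev_null_resize NULL1 "$null_size"
done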
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.296 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:31.296 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:31.554 true 00:36:31.811 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:31.811 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.068 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.325 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:32.325 01:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:32.583 true 00:36:32.583 01:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:32.583 01:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.519 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.776 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:33.776 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:34.033 true 00:36:34.033 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:34.033 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.359 01:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.616 01:54:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:34.616 01:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:34.616 true 00:36:34.874 01:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:34.874 01:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.441 01:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:35.698 01:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:35.698 01:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:35.956 true 00:36:35.956 01:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:35.956 01:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.522 01:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.522 01:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:36.522 01:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:36.780 true 00:36:36.780 01:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:36.780 01:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.713 01:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.971 01:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:37.971 01:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:38.229 true 00:36:38.229 01:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:38.229 01:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.487 01:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.745 01:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:38.745 01:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:39.003 true 00:36:39.003 01:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:39.003 01:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.938 01:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:40.196 01:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:40.196 01:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:40.454 true 00:36:40.454 01:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:40.454 01:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.712 01:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.970 01:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:40.970 01:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 
00:36:41.228 true 00:36:41.228 01:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:41.228 01:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.162 01:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.420 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:42.420 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:42.679 true 00:36:42.679 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:42.679 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.937 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.196 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:43.196 01:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:43.454 true 00:36:43.454 01:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:43.454 01:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.712 01:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.971 01:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:43.971 01:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:44.230 true 00:36:44.230 01:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:44.230 01:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.167 01:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.425 01:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:45.425 01:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:45.683 true 00:36:45.683 01:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:45.683 01:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.941 01:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.507 01:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:46.507 01:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:46.507 true 00:36:46.507 01:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:46.507 01:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.441 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:47.698 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:47.698 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:47.956 true 00:36:47.956 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:47.956 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.214 01:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.474 01:54:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:48.474 01:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:48.732 true 00:36:48.732 01:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:48.732 01:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:49.666 01:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.924 01:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:49.924 01:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:50.181 true 00:36:50.181 01:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:50.181 01:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.439 01:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.696 01:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:50.696 01:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:50.953 true 00:36:50.953 01:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:50.953 01:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.884 01:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.884 01:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:51.884 01:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:52.140 true 
00:36:52.140 01:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:52.140 01:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.397 01:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.962 01:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:52.962 01:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:52.962 true 00:36:52.962 01:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:52.962 01:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.219 01:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.477 01:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:53.477 01:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:53.735 true 00:36:53.992 01:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:53.992 01:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.924 01:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.182 01:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:55.182 01:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:55.440 true 00:36:55.440 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:55.440 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.697 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.954 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:55.954 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:56.212 true 00:36:56.212 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:56.212 01:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.469 01:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.727 01:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:56.727 01:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:56.985 true 00:36:56.985 01:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:56.985 01:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.920 01:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.178 Initializing NVMe Controllers 00:36:58.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:58.178 Controller IO queue size 128, less than required. 00:36:58.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.178 Controller IO queue size 128, less than required. 00:36:58.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:58.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:58.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:58.178 Initialization complete. Launching workers. 
00:36:58.178 ======================================================== 00:36:58.178 Latency(us) 00:36:58.178 Device Information : IOPS MiB/s Average min max 00:36:58.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 869.70 0.42 72200.11 1928.01 1022682.20 00:36:58.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9622.65 4.70 13303.07 1738.31 455616.22 00:36:58.178 ======================================================== 00:36:58.179 Total : 10492.35 5.12 18184.98 1738.31 1022682.20 00:36:58.179 00:36:58.179 01:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:58.179 01:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:58.436 true 00:36:58.437 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1070257 00:36:58.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1070257) - No such process 00:36:58.437 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1070257 00:36:58.437 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.694 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.952 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:58.952 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:58.952 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:58.952 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:58.952 01:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:59.211 null0 00:36:59.211 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:59.211 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:59.211 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:59.468 null1 00:36:59.468 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:59.468 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:59.468 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:59.726 null2 00:36:59.984 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:59.984 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:59.984 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:00.241 null3 00:37:00.241 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.241 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.241 01:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:00.499 null4 00:37:00.499 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.499 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.499 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:00.790 null5 00:37:00.790 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.790 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.790 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:01.093 null6 00:37:01.093 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.093 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.093 01:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:01.357 null7 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
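
The trace above (ns_hotplug_stress.sh@58-60) creates eight 100 MB null bdevs with a 4096-byte block size, null0 through null7, one per worker. A minimal sketch of that setup loop, reconstructed from the trace; the loop variable and the abbreviated rpc.py path are illustrative, not necessarily the script's own wording:

    nthreads=8
    pids=()
    # one backing null bdev per worker: null0 .. null7, 100 MB, 4096-byte blocks
    for ((i = 0; i < nthreads; i++)); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096
    done
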
00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:01.357 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
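
Each add_remove N nullX call in the trace is launched as a background job (ns_hotplug_stress.sh@62-64), and its PID is appended to the pids array so the script can later wait on all eight workers at once. A minimal sketch of that launcher, assuming add_remove is the shell function whose body shows up in the @14-@18 trace lines and that the nsid is simply i+1:

    # start 8 concurrent add/remove workers, one namespace ID and backing bdev each
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # matches the 'wait 1074606 1074607 ...' entry that follows
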
00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1074606 1074607 1074609 1074610 1074613 1074615 1074617 1074619 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.358 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.617 01:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.617 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.876 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.135 01:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
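
The interleaved @14/@16/@17/@18 entries above and below are the eight workers running concurrently: each one performs ten iterations of nvmf_subsystem_add_ns followed by nvmf_subsystem_remove_ns for its own namespace ID against nqn.2016-06.io.spdk:cnode1. A minimal sketch of that worker function, reconstructed from the trace (rpc.py path abbreviated; exact variable names in the real script may differ):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace $nsid, then detach it again
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
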
00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.394 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:02.652 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.652 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.652 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.652 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.652 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.912 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.912 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.912 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.171 01:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.430 01:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.430 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.689 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.948 
01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.948 01:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.207 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.208 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.466 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.725 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.725 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.984 
01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.984 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.243 01:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.502 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.761 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.019 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.020 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.278 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.278 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.278 01:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.278 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:06.538 01:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:06.538 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.796 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.797 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.054 01:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:07.314 rmmod nvme_tcp 00:37:07.314 rmmod nvme_fabrics 00:37:07.314 rmmod nvme_keyring 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 1069686 ']' 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 1069686 00:37:07.314 01:54:47 
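The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries above are the hotplug stress loop itself: each of the eight null bdevs (null0..null7) is repeatedly attached to nqn.2016-06.io.spdk:cnode1 as namespace 1..8 and detached again, ten passes per namespace, until every counter reaches 10. The ordering looks shuffled because the trace is consistent with one worker per namespace running concurrently, so their xtrace output interleaves. A minimal sketch of that structure, reconstructed from the ns_hotplug_stress.sh@16-18 trace entries (the worker function, its name, and the parallel launch are assumptions; only the rpc.py calls are taken verbatim from the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    # attach and detach one namespace repeatedly (hypothetical helper; the real script body may differ)
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do                               # ns_hotplug_stress.sh@16 in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &   # eight workers in parallel; their trace lines interleave as above
done
wait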
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1069686 ']' 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1069686 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:37:07.314 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069686 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069686' 00:37:07.572 killing process with pid 1069686 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1069686 00:37:07.572 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1069686 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.831 01:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:09.733 00:37:09.733 real 0m47.400s 00:37:09.733 user 3m17.576s 00:37:09.733 sys 0m22.546s 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:09.733 01:54:49 
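The teardown traced above (nvmftestfini) follows a fixed pattern: flush, unload the NVMe-oF initiator modules, kill and wait for the target process, strip only the SPDK-tagged iptables rules, and remove the test network namespace and leftover addresses. A condensed sketch of that sequence as it appears in the nvmf/common.sh trace (the namespace removal step is an assumption, since _remove_spdk_ns runs with xtrace disabled in the log; error handling and the retry loop are simplified):

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt reactor process started earlier
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged rules
ip netns delete cvl_0_0_ns_spdk 2> /dev/null             # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address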
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:09.733 ************************************ 00:37:09.733 END TEST nvmf_ns_hotplug_stress 00:37:09.733 ************************************ 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:09.733 ************************************ 00:37:09.733 START TEST nvmf_delete_subsystem 00:37:09.733 ************************************ 00:37:09.733 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:09.992 * Looking for test storage... 00:37:09.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:09.992 01:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.992 --rc genhtml_branch_coverage=1 00:37:09.992 --rc genhtml_function_coverage=1 00:37:09.992 --rc genhtml_legend=1 00:37:09.992 --rc geninfo_all_blocks=1 00:37:09.992 --rc geninfo_unexecuted_blocks=1 00:37:09.992 00:37:09.992 ' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.992 --rc genhtml_branch_coverage=1 00:37:09.992 --rc genhtml_function_coverage=1 00:37:09.992 --rc genhtml_legend=1 00:37:09.992 --rc geninfo_all_blocks=1 00:37:09.992 --rc geninfo_unexecuted_blocks=1 00:37:09.992 00:37:09.992 ' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.992 --rc genhtml_branch_coverage=1 00:37:09.992 --rc genhtml_function_coverage=1 00:37:09.992 --rc genhtml_legend=1 00:37:09.992 --rc geninfo_all_blocks=1 00:37:09.992 --rc 
geninfo_unexecuted_blocks=1 00:37:09.992 00:37:09.992 ' 00:37:09.992 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:09.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.992 --rc genhtml_branch_coverage=1 00:37:09.992 --rc genhtml_function_coverage=1 00:37:09.992 --rc genhtml_legend=1 00:37:09.992 --rc geninfo_all_blocks=1 00:37:09.993 --rc geninfo_unexecuted_blocks=1 00:37:09.993 00:37:09.993 ' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:09.993 01:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:09.993 01:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.896 01:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:11.896 01:54:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:11.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:11.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:11.896 01:54:51 
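The device discovery traced here walks the supported Intel/Mellanox PCI IDs (both ports found are E810, 0x8086:0x159b) and resolves each function to its kernel net device through sysfs, producing the "Found net devices under ..." lines just below. A minimal sketch of that lookup, reconstructed from the gather_supported_nvmf_pci_devs trace (loop structure and variable names are assumptions; the sysfs path and the link-state check are taken from the trace):

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do                     # the two E810 ports found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)         # sysfs lists the net device bound to this function
    for net_dev in "${pci_net_devs[@]##*/}"; do
        [[ $(< "/sys/class/net/$net_dev/operstate") == up ]] || continue   # the trace only keeps links that are up
        echo "Found net devices under $pci: $net_dev"
        net_devs+=("$net_dev")
    done
done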
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:11.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:11.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:11.896 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.897 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.155 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.155 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:12.155 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:12.155 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:12.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:37:12.155 00:37:12.155 --- 10.0.0.2 ping statistics --- 00:37:12.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.156 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:12.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:37:12.156 00:37:12.156 --- 10.0.0.1 ping statistics --- 00:37:12.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.156 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=1077366 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 1077366 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1077366 ']' 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
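The nvmf_tcp_init sequence above builds the back-to-back test topology: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420 for the SPDK traffic, and one ping in each direction verifies the link before the target is launched inside the namespace. The same sequence as a standalone sketch, with the commands consolidated from the trace (only the $ns shorthand is added here):

ns=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                             # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                          # root namespace -> target
ip netns exec "$ns" ping -c 1 10.0.0.1                      # target namespace -> initiator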
00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:12.156 01:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.156 [2024-10-01 01:54:51.853626] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:12.156 [2024-10-01 01:54:51.854744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:12.156 [2024-10-01 01:54:51.854801] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.156 [2024-10-01 01:54:51.924591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:12.415 [2024-10-01 01:54:52.016071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.415 [2024-10-01 01:54:52.016132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.415 [2024-10-01 01:54:52.016148] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.415 [2024-10-01 01:54:52.016163] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.415 [2024-10-01 01:54:52.016175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.415 [2024-10-01 01:54:52.016267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.415 [2024-10-01 01:54:52.016274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.415 [2024-10-01 01:54:52.112014] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:12.415 [2024-10-01 01:54:52.112083] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:12.415 [2024-10-01 01:54:52.112325] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
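nvmfappstart then launches the target inside that namespace with -m 0x3 (cores 0 and 1) and --interrupt-mode, and blocks until the application answers on its RPC socket; the NOTICE lines above confirm interrupt mode was applied to the app thread and both nvmf poll groups before the test proceeds. A rough sketch of that start-and-wait step, polling spdk_get_version over rpc.py as a stand-in for the test's waitforlisten helper ($SPDK_DIR is shorthand for the build tree used in this job):

  # Sketch only: approximate equivalent of nvmfappstart -m 0x3 in interrupt mode.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS_EXEC=(ip netns exec cvl_0_0_ns_spdk)

  "${NS_EXEC[@]}" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # The RPC socket is a unix socket (/var/tmp/spdk.sock), so it is reachable from
  # the default network namespace even though the target runs inside the netns.
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done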
00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 [2024-10-01 01:54:52.161031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 [2024-10-01 01:54:52.197286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 NULL1 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 Delay0 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1077506 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:12.415 01:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:12.673 [2024-10-01 01:54:52.271684] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
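Before driving I/O, delete_subsystem.sh assembles the target stack over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 allowing any host and up to 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MB null bdev, and a Delay0 delay bdev layered on top with large artificial latencies so that I/O is still outstanding when the subsystem is deleted; spdk_nvme_perf is then started in the background and given two seconds to fill its queues. A condensed sketch of that sequence, using rpc.py directly in place of the test's rpc_cmd wrapper (same hypothetical $SPDK_DIR shorthand as above):

  # Sketch only: the RPC sequence traced above, without the xtrace plumbing.
  RPC="$SPDK_DIR/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  "$RPC" bdev_null_create NULL1 1000 512
  "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

  # randrw 70/30, 512-byte I/O, queue depth 128, on cores 2-3, for 5 seconds.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  "$RPC" nvmf_delete_subsystem "$NQN"   # issued while Delay0 still holds queued I/O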
00:37:14.571 01:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.571 01:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.571 01:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 [2024-10-01 01:54:54.312989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdac800d470 is same with the state(6) to be set 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read 
completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Write completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.571 starting I/O failed: -6 00:37:14.571 Read completed with error (sct=0, sc=8) 00:37:14.572 [2024-10-01 01:54:54.313605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab40b0 is same with the state(6) to be set 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, 
sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed 
with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 [2024-10-01 01:54:54.314090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdac8000c00 is same with the state(6) to be set 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Write completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:14.572 Read completed with error (sct=0, sc=8) 00:37:15.506 [2024-10-01 01:54:55.287493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab1d00 is same with the state(6) to be set 00:37:15.506 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 [2024-10-01 01:54:55.313584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdac800cfe0 is same with the state(6) to be set 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read 
completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 [2024-10-01 01:54:55.315747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdac800d7a0 is same with the state(6) to be set 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 [2024-10-01 01:54:55.316117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3ed0 is same with the state(6) to be set 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 Write completed with error (sct=0, sc=8) 00:37:15.507 Read completed with error (sct=0, sc=8) 00:37:15.507 [2024-10-01 01:54:55.316607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4290 is same with the state(6) to be set 00:37:15.507 Initializing NVMe Controllers 00:37:15.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:15.507 Controller IO queue size 128, less than required. 00:37:15.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:37:15.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:15.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:15.507 Initialization complete. Launching workers. 00:37:15.507 ======================================================== 00:37:15.507 Latency(us) 00:37:15.507 Device Information : IOPS MiB/s Average min max 00:37:15.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.68 0.08 904871.46 591.03 1012413.97 00:37:15.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.72 0.08 916529.41 1114.39 1013107.85 00:37:15.507 ======================================================== 00:37:15.507 Total : 327.40 0.16 910594.46 591.03 1013107.85 00:37:15.507 00:37:15.507 [2024-10-01 01:54:55.317070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab1d00 (9): Bad file descriptor 00:37:15.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:15.507 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.507 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:15.507 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1077506 00:37:15.507 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1077506 00:37:16.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1077506) - No such process 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1077506 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1077506 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1077506 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.076 [2024-10-01 01:54:55.837211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1077910 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:16.076 01:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:16.076 [2024-10-01 01:54:55.888835] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
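What follows in the trace is the bounded wait loop from delete_subsystem.sh: with spdk_nvme_perf (pid 1077910 in this pass) running in the background, the script probes the process with kill -0, sleeps 0.5 s between probes, and gives up once a counter passes its limit (20 here, 30 in the first pass), continuing until the probe finally reports "No such process". A minimal sketch of that pattern as a reusable function (the function name and parameters are illustrative, not part of the test):

  # Sketch only: the kill -0 / sleep 0.5 polling pattern seen in the trace below.
  wait_for_exit() {
      local pid=$1 limit=$2 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          (( delay++ > limit )) && { echo "pid $pid still alive after limit" >&2; return 1; }
          sleep 0.5
      done
      return 0
  }

  wait_for_exit "$perf_pid" 20   # roughly a 10 s budget at 0.5 s per probe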
00:37:16.642 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:16.642 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:16.642 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:17.207 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.207 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:17.207 01:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:17.771 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:17.771 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:17.771 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.028 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.028 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:18.028 01:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.592 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.592 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:18.592 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.156 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.156 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:19.156 01:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.413 Initializing NVMe Controllers 00:37:19.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:19.413 Controller IO queue size 128, less than required. 00:37:19.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:19.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:19.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:19.413 Initialization complete. Launching workers. 
00:37:19.414 ======================================================== 00:37:19.414 Latency(us) 00:37:19.414 Device Information : IOPS MiB/s Average min max 00:37:19.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004670.84 1000314.51 1012959.43 00:37:19.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004189.71 1000283.90 1012515.53 00:37:19.414 ======================================================== 00:37:19.414 Total : 256.00 0.12 1004430.27 1000283.90 1012959.43 00:37:19.414 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077910 00:37:19.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1077910) - No such process 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1077910 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:19.671 rmmod nvme_tcp 00:37:19.671 rmmod nvme_fabrics 00:37:19.671 rmmod nvme_keyring 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 1077366 ']' 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 1077366 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1077366 ']' 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1077366 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077366 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:19.671 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077366' 00:37:19.672 killing process with pid 1077366 00:37:19.672 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1077366 00:37:19.672 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1077366 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.930 01:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.462 00:37:22.462 real 0m12.164s 00:37:22.462 user 0m24.219s 00:37:22.462 sys 0m3.715s 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:22.462 ************************************ 00:37:22.462 END TEST nvmf_delete_subsystem 00:37:22.462 ************************************ 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.462 ************************************ 00:37:22.462 START TEST nvmf_host_management 00:37:22.462 ************************************ 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:22.462 * Looking for test storage... 00:37:22.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.462 --rc genhtml_branch_coverage=1 00:37:22.462 --rc genhtml_function_coverage=1 00:37:22.462 --rc genhtml_legend=1 00:37:22.462 --rc geninfo_all_blocks=1 00:37:22.462 --rc geninfo_unexecuted_blocks=1 00:37:22.462 00:37:22.462 ' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.462 --rc genhtml_branch_coverage=1 00:37:22.462 --rc genhtml_function_coverage=1 00:37:22.462 --rc genhtml_legend=1 00:37:22.462 --rc geninfo_all_blocks=1 00:37:22.462 --rc geninfo_unexecuted_blocks=1 00:37:22.462 00:37:22.462 ' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.462 --rc genhtml_branch_coverage=1 00:37:22.462 --rc genhtml_function_coverage=1 00:37:22.462 --rc genhtml_legend=1 00:37:22.462 --rc geninfo_all_blocks=1 00:37:22.462 --rc geninfo_unexecuted_blocks=1 00:37:22.462 00:37:22.462 ' 00:37:22.462 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:22.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.463 --rc genhtml_branch_coverage=1 00:37:22.463 --rc genhtml_function_coverage=1 00:37:22.463 --rc genhtml_legend=1 
00:37:22.463 --rc geninfo_all_blocks=1 00:37:22.463 --rc geninfo_unexecuted_blocks=1 00:37:22.463 00:37:22.463 ' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.463 01:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.463 01:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.366 01:55:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.366 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:24.367 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:24.367 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.367 
01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:24.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:24.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:24.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:24.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:37:24.367 00:37:24.367 --- 10.0.0.2 ping statistics --- 00:37:24.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.367 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:37:24.367 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:24.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:37:24.368 00:37:24.368 --- 10.0.0.1 ping statistics --- 00:37:24.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.368 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=1080258 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 1080258 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1080258 ']' 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:24.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:24.368 01:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.368 [2024-10-01 01:55:03.966426] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:24.368 [2024-10-01 01:55:03.967494] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:24.368 [2024-10-01 01:55:03.967561] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.368 [2024-10-01 01:55:04.032624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:24.368 [2024-10-01 01:55:04.118658] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.368 [2024-10-01 01:55:04.118712] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.368 [2024-10-01 01:55:04.118742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.368 [2024-10-01 01:55:04.118754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.368 [2024-10-01 01:55:04.118764] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.368 [2024-10-01 01:55:04.118855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.368 [2024-10-01 01:55:04.118917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:24.368 [2024-10-01 01:55:04.118950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:24.368 [2024-10-01 01:55:04.118952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.368 [2024-10-01 01:55:04.219077] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:24.368 [2024-10-01 01:55:04.219333] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:24.368 [2024-10-01 01:55:04.219578] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:24.626 [2024-10-01 01:55:04.220170] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:24.626 [2024-10-01 01:55:04.220416] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
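The setup traced above can be summarised as a short, self-contained script. This is only a sketch of what the nvmftestinit/nvmfappstart helpers in nvmf/common.sh do on this particular host: the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the Jenkins workspace path are taken from the trace and will differ elsewhere, and the real helpers also handle RDMA, virtual NICs and cleanup paths that are omitted here.

#!/usr/bin/env bash
# Sketch: recreate the namespace-based NVMe/TCP test bed used in this run.
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk      # network namespace that owns the target-side port
TARGET_IF=cvl_0_0              # physical port handed to the target
INITIATOR_IF=cvl_0_1           # physical port left in the default namespace
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) reach the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

# Start nvmf_tgt inside the namespace in interrupt mode on cores 1-4 (mask 0x1E),
# mirroring the nvmfappstart arguments in the trace.
ip netns exec "$TARGET_NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &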
00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.626 [2024-10-01 01:55:04.267744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.626 Malloc0 00:37:24.626 [2024-10-01 01:55:04.327864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1080302 00:37:24.626 01:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1080302 /var/tmp/bdevperf.sock 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1080302 ']' 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:24.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:24.626 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:24.627 { 00:37:24.627 "params": { 00:37:24.627 "name": "Nvme$subsystem", 00:37:24.627 "trtype": "$TEST_TRANSPORT", 00:37:24.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.627 "adrfam": "ipv4", 00:37:24.627 "trsvcid": "$NVMF_PORT", 00:37:24.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.627 "hdgst": ${hdgst:-false}, 00:37:24.627 "ddgst": ${ddgst:-false} 00:37:24.627 }, 00:37:24.627 "method": "bdev_nvme_attach_controller" 00:37:24.627 } 00:37:24.627 EOF 00:37:24.627 )") 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
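For reference, the bdevperf side of this step can be reproduced on its own roughly as follows. This is a sketch: the wrapper object around the bdev_nvme_attach_controller entry is assumed to match what gen_nvmf_target_json normally produces (the params themselves are printed a few lines below), and the target address, NQNs and the -q/-o/-w/-t workload options are the ones visible in this trace.

#!/usr/bin/env bash
# Sketch: attach to the target over NVMe/TCP and run the same 10 s verify workload.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/nvme0_tcp.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Queue depth 64, 64 KiB I/Os, verify workload, 10 seconds, RPC socket for live queries.
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0_tcp.json -q 64 -o 65536 -w verify -t 10 &

# While it runs, poll the read counter the way waitforio does further below.
sleep 1
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops'
wait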
00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:24.627 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:24.627 "params": { 00:37:24.627 "name": "Nvme0", 00:37:24.627 "trtype": "tcp", 00:37:24.627 "traddr": "10.0.0.2", 00:37:24.627 "adrfam": "ipv4", 00:37:24.627 "trsvcid": "4420", 00:37:24.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.627 "hdgst": false, 00:37:24.627 "ddgst": false 00:37:24.627 }, 00:37:24.627 "method": "bdev_nvme_attach_controller" 00:37:24.627 }' 00:37:24.627 [2024-10-01 01:55:04.404340] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:24.627 [2024-10-01 01:55:04.404418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080302 ] 00:37:24.627 [2024-10-01 01:55:04.466698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.885 [2024-10-01 01:55:04.555119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.143 Running I/O for 10 seconds... 00:37:25.143 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:25.143 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:25.143 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:25.143 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.143 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:25.144 01:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.402 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:25.663 [2024-10-01 01:55:05.257217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257291] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 
00:37:25.663 [2024-10-01 01:55:05.257587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.663 [2024-10-01 01:55:05.257835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is 
same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.257995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.258018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.258030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.258042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc01b0 is same with the state(6) to be set 00:37:25.664 [2024-10-01 01:55:05.258161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 
[2024-10-01 01:55:05.258314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258611] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.258971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.258985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.664 [2024-10-01 01:55:05.259012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.664 [2024-10-01 01:55:05.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.259972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.259987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.665 [2024-10-01 01:55:05.260164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.665 [2024-10-01 01:55:05.260179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eae10 is same with the state(6) to be set 00:37:25.666 [2024-10-01 01:55:05.260252] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9eae10 was disconnected and freed. reset controller. 00:37:25.666 [2024-10-01 01:55:05.260335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:25.666 [2024-10-01 01:55:05.260362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.666 [2024-10-01 01:55:05.260378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:25.666 [2024-10-01 01:55:05.260391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.666 [2024-10-01 01:55:05.260405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.666 [2024-10-01 01:55:05.260418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.666 [2024-10-01 01:55:05.260434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:25.666 [2024-10-01 01:55:05.260447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:25.666 [2024-10-01 01:55:05.260460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d2090 is same with the state(6) to be set 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:25.666 [2024-10-01 01:55:05.261624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:25.666 task offset: 73728 on job bdev=Nvme0n1 fails 00:37:25.666 00:37:25.666 Latency(us) 00:37:25.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.666 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:25.666 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:25.666 Verification LBA range: start 0x0 length 0x400 00:37:25.666 Nvme0n1 : 0.41 1413.81 88.36 157.09 0.00 39594.42 6068.15 35146.71 00:37:25.666 =================================================================================================================== 00:37:25.666 Total : 
1413.81 88.36 157.09 0.00 39594.42 6068.15 35146.71 00:37:25.666 [2024-10-01 01:55:05.263799] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:25.666 [2024-10-01 01:55:05.263828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2090 (9): Bad file descriptor 00:37:25.666 [2024-10-01 01:55:05.267870] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.666 01:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1080302 00:37:26.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1080302) - No such process 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:26.601 { 00:37:26.601 "params": { 00:37:26.601 "name": "Nvme$subsystem", 00:37:26.601 "trtype": "$TEST_TRANSPORT", 00:37:26.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.601 "adrfam": "ipv4", 00:37:26.601 "trsvcid": "$NVMF_PORT", 00:37:26.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.601 "hdgst": ${hdgst:-false}, 00:37:26.601 "ddgst": ${ddgst:-false} 00:37:26.601 }, 00:37:26.601 "method": "bdev_nvme_attach_controller" 00:37:26.601 } 00:37:26.601 EOF 00:37:26.601 )") 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
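For readability, the gen_nvmf_target_json block traced above amounts to: emit one bdev_nvme_attach_controller entry per target subsystem, join the entries, and hand the result to bdevperf over a file descriptor instead of a temp file. The sketch below is a standalone approximation of that idea, not the harness helper itself; the target address, NQNs, queue settings and digest flags are copied from this run, while the outer "subsystems"/"bdev" wrapper and the relative ./build paths are assumptions about a local SPDK checkout rather than something visible in this trace.

gen_bdevperf_json() {
    # One bdev_nvme_attach_controller entry per target subsystem; this run only needs Nvme0 on cnode0.
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# Same shape as the bdevperf invocation traced above; the process substitution is what
# shows up as --json /dev/fd/62 in the log.
./build/examples/bdevperf --json <(gen_bdevperf_json) -q 64 -o 65536 -w verify -t 1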
00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:37:26.601 01:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:26.601 "params": { 00:37:26.601 "name": "Nvme0", 00:37:26.601 "trtype": "tcp", 00:37:26.601 "traddr": "10.0.0.2", 00:37:26.601 "adrfam": "ipv4", 00:37:26.601 "trsvcid": "4420", 00:37:26.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.601 "hdgst": false, 00:37:26.601 "ddgst": false 00:37:26.601 }, 00:37:26.601 "method": "bdev_nvme_attach_controller" 00:37:26.601 }' 00:37:26.601 [2024-10-01 01:55:06.317267] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:26.601 [2024-10-01 01:55:06.317366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080577 ] 00:37:26.601 [2024-10-01 01:55:06.379033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.859 [2024-10-01 01:55:06.465212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.117 Running I/O for 1 seconds... 00:37:28.050 1536.00 IOPS, 96.00 MiB/s 00:37:28.050 Latency(us) 00:37:28.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:28.050 Verification LBA range: start 0x0 length 0x400 00:37:28.050 Nvme0n1 : 1.01 1584.97 99.06 0.00 0.00 39733.54 7815.77 33981.63 00:37:28.051 =================================================================================================================== 00:37:28.051 Total : 1584.97 99.06 0.00 0.00 39733.54 7815.77 33981.63 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.308 01:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.308 rmmod nvme_tcp 00:37:28.308 rmmod 
nvme_fabrics 00:37:28.308 rmmod nvme_keyring 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 1080258 ']' 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 1080258 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1080258 ']' 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1080258 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1080258 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1080258' 00:37:28.308 killing process with pid 1080258 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1080258 00:37:28.308 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1080258 00:37:28.567 [2024-10-01 01:55:08.298808] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.567 01:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:31.131 00:37:31.131 real 0m8.599s 00:37:31.131 user 0m17.567s 00:37:31.131 sys 0m3.655s 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.131 ************************************ 00:37:31.131 END TEST nvmf_host_management 00:37:31.131 ************************************ 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:31.131 ************************************ 00:37:31.131 START TEST nvmf_lvol 00:37:31.131 ************************************ 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:31.131 * Looking for test storage... 
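The nvmf_lvol suite that starts here is launched through run_test, which essentially wraps the script invocation with the START TEST / END TEST banners visible in this log. To rerun it outside the CI harness, the direct invocation would look roughly like the lines below; the /var/jenkins workspace prefix is specific to this build node, so a local checkout path is assumed instead.

cd /path/to/spdk            # assumption: a built SPDK checkout with the test dependencies installed
sudo ./test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode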
00:37:31.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.131 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:31.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.132 --rc genhtml_branch_coverage=1 00:37:31.132 --rc genhtml_function_coverage=1 00:37:31.132 --rc genhtml_legend=1 00:37:31.132 --rc geninfo_all_blocks=1 00:37:31.132 --rc geninfo_unexecuted_blocks=1 00:37:31.132 00:37:31.132 ' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:31.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.132 --rc genhtml_branch_coverage=1 00:37:31.132 --rc genhtml_function_coverage=1 00:37:31.132 --rc genhtml_legend=1 00:37:31.132 --rc geninfo_all_blocks=1 00:37:31.132 --rc geninfo_unexecuted_blocks=1 00:37:31.132 00:37:31.132 ' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:31.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.132 --rc genhtml_branch_coverage=1 00:37:31.132 --rc genhtml_function_coverage=1 00:37:31.132 --rc genhtml_legend=1 00:37:31.132 --rc geninfo_all_blocks=1 00:37:31.132 --rc geninfo_unexecuted_blocks=1 00:37:31.132 00:37:31.132 ' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:31.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.132 --rc genhtml_branch_coverage=1 00:37:31.132 --rc genhtml_function_coverage=1 00:37:31.132 --rc genhtml_legend=1 00:37:31.132 --rc geninfo_all_blocks=1 00:37:31.132 --rc geninfo_unexecuted_blocks=1 00:37:31.132 00:37:31.132 ' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.132 01:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:31.132 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.133 01:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:33.069 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:33.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:33.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:33.070 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:33.070 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.070 01:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:37:33.070 00:37:33.070 --- 10.0.0.2 ping statistics --- 00:37:33.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.070 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:33.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:37:33.070 00:37:33.070 --- 10.0.0.1 ping statistics --- 00:37:33.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.070 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:33.070 01:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=1082778 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 1082778 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1082778 ']' 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:33.070 01:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:33.071 [2024-10-01 01:55:12.807040] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:33.071 [2024-10-01 01:55:12.808096] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:33.071 [2024-10-01 01:55:12.808164] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.071 [2024-10-01 01:55:12.872043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:33.329 [2024-10-01 01:55:12.959110] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.329 [2024-10-01 01:55:12.959164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.329 [2024-10-01 01:55:12.959194] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.329 [2024-10-01 01:55:12.959207] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.329 [2024-10-01 01:55:12.959218] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.329 [2024-10-01 01:55:12.959306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.329 [2024-10-01 01:55:12.959336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.329 [2024-10-01 01:55:12.959338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.329 [2024-10-01 01:55:13.061359] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:33.329 [2024-10-01 01:55:13.061620] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
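The trace that follows drives the target through a fairly long RPC sequence; gathered into one place, the provisioning flow this test performs looks roughly like the sketch below. Every rpc.py call is taken from the trace of this run; capturing the malloc, lvstore and lvol names from stdout mirrors what the script does, the relative paths assume the SPDK checkout as working directory, and the exact names/UUIDs will differ on a re-run.

rpc=./scripts/rpc.py                               # assumption: default RPC socket, run from the SPDK checkout

$rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte in-capsule data size
malloc0=$($rpc bdev_malloc_create 64 512)          # two 64 MiB malloc bdevs with 512-byte blocks
malloc1=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$malloc0 $malloc1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol, later resized to 30 in this test
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420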
00:37:33.329 [2024-10-01 01:55:13.061634] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:33.329 [2024-10-01 01:55:13.061898] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.329 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:33.588 [2024-10-01 01:55:13.352058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.588 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:33.846 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:33.846 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:34.105 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:34.105 01:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:34.672 01:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:34.931 01:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=130542cd-89ac-488c-80de-82f681fa226a 00:37:34.931 01:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 130542cd-89ac-488c-80de-82f681fa226a lvol 20 00:37:35.189 01:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=286b6746-f0d5-4d39-a60c-da735f339fcb 00:37:35.189 01:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:35.447 01:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 286b6746-f0d5-4d39-a60c-da735f339fcb 00:37:35.705 01:55:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:35.962 [2024-10-01 01:55:15.604251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.962 01:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.220 01:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1083091 00:37:36.220 01:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:36.220 01:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:37.154 01:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 286b6746-f0d5-4d39-a60c-da735f339fcb MY_SNAPSHOT 00:37:37.412 01:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9a30664b-9326-4133-8b3a-f5ad950b3dd1 00:37:37.412 01:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 286b6746-f0d5-4d39-a60c-da735f339fcb 30 00:37:37.670 01:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9a30664b-9326-4133-8b3a-f5ad950b3dd1 MY_CLONE 00:37:38.235 01:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b84f839f-7b70-4a06-9761-a413001ebaa6 00:37:38.235 01:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b84f839f-7b70-4a06-9761-a413001ebaa6 00:37:38.801 01:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1083091 00:37:46.909 Initializing NVMe Controllers 00:37:46.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:46.909 Controller IO queue size 128, less than required. 00:37:46.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:46.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:46.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:46.909 Initialization complete. Launching workers. 
00:37:46.909 ======================================================== 00:37:46.909 Latency(us) 00:37:46.909 Device Information : IOPS MiB/s Average min max 00:37:46.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10437.70 40.77 12263.72 5904.09 72584.59 00:37:46.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10323.80 40.33 12398.56 5977.41 71399.53 00:37:46.909 ======================================================== 00:37:46.909 Total : 20761.50 81.10 12330.77 5904.09 72584.59 00:37:46.909 00:37:46.909 01:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.909 01:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 286b6746-f0d5-4d39-a60c-da735f339fcb 00:37:47.166 01:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 130542cd-89ac-488c-80de-82f681fa226a 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.423 rmmod nvme_tcp 00:37:47.423 rmmod nvme_fabrics 00:37:47.423 rmmod nvme_keyring 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 1082778 ']' 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 1082778 00:37:47.423 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1082778 ']' 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1082778 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1082778 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1082778' 00:37:47.424 killing process with pid 1082778 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1082778 00:37:47.424 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1082778 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.682 01:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.217 00:37:50.217 real 0m19.076s 00:37:50.217 user 0m55.255s 00:37:50.217 sys 0m8.062s 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:50.217 ************************************ 00:37:50.217 END TEST nvmf_lvol 00:37:50.217 ************************************ 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:50.217 ************************************ 00:37:50.217 START TEST nvmf_lvs_grow 00:37:50.217 
************************************ 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:50.217 * Looking for test storage... 00:37:50.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.217 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:50.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.218 --rc genhtml_branch_coverage=1 00:37:50.218 --rc genhtml_function_coverage=1 00:37:50.218 --rc genhtml_legend=1 00:37:50.218 --rc geninfo_all_blocks=1 00:37:50.218 --rc geninfo_unexecuted_blocks=1 00:37:50.218 00:37:50.218 ' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:50.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.218 --rc genhtml_branch_coverage=1 00:37:50.218 --rc genhtml_function_coverage=1 00:37:50.218 --rc genhtml_legend=1 00:37:50.218 --rc geninfo_all_blocks=1 00:37:50.218 --rc geninfo_unexecuted_blocks=1 00:37:50.218 00:37:50.218 ' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:50.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.218 --rc genhtml_branch_coverage=1 00:37:50.218 --rc genhtml_function_coverage=1 00:37:50.218 --rc genhtml_legend=1 00:37:50.218 --rc geninfo_all_blocks=1 00:37:50.218 --rc geninfo_unexecuted_blocks=1 00:37:50.218 00:37:50.218 ' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:50.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.218 --rc genhtml_branch_coverage=1 00:37:50.218 --rc genhtml_function_coverage=1 00:37:50.218 --rc genhtml_legend=1 00:37:50.218 --rc geninfo_all_blocks=1 00:37:50.218 --rc geninfo_unexecuted_blocks=1 00:37:50.218 00:37:50.218 ' 00:37:50.218 01:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.218 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.219 01:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:52.122 01:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:52.122 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:37:52.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:52.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:52.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:52.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:52.123 01:55:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:52.123 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:52.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:52.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:37:52.123 00:37:52.124 --- 10.0.0.2 ping statistics --- 00:37:52.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.124 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:52.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:52.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:37:52.124 00:37:52.124 --- 10.0.0.1 ping statistics --- 00:37:52.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.124 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=1086453 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 1086453 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1086453 ']' 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:52.124 01:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:52.383 [2024-10-01 01:55:31.979238] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:52.383 [2024-10-01 01:55:31.980405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:52.383 [2024-10-01 01:55:31.980470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.383 [2024-10-01 01:55:32.045672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.383 [2024-10-01 01:55:32.137087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.383 [2024-10-01 01:55:32.137150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:52.383 [2024-10-01 01:55:32.137179] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.383 [2024-10-01 01:55:32.137191] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.383 [2024-10-01 01:55:32.137201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:52.383 [2024-10-01 01:55:32.137231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.383 [2024-10-01 01:55:32.229681] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:52.383 [2024-10-01 01:55:32.230034] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:52.642 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:52.911 [2024-10-01 01:55:32.521828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:52.911 ************************************ 00:37:52.911 START TEST lvs_grow_clean 00:37:52.911 ************************************ 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:52.911 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:52.912 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:52.912 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:52.912 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:52.912 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:52.912 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:53.171 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:53.171 01:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:53.429 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:37:53.429 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:37:53.430 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:53.688 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:53.688 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:53.688 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d1f58cc0-f2d4-464b-87df-5b8712d8124d lvol 150 00:37:53.947 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f7073879-6a50-4592-9391-a75efcb15bc2 00:37:53.947 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:53.947 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:54.206 [2024-10-01 01:55:33.953744] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:54.206 [2024-10-01 01:55:33.953857] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:54.206 true 00:37:54.206 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:37:54.206 01:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:54.465 01:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:54.465 01:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:54.724 01:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7073879-6a50-4592-9391-a75efcb15bc2 00:37:54.982 01:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:55.241 [2024-10-01 01:55:35.050068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.241 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1086882 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1086882 /var/tmp/bdevperf.sock 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1086882 ']' 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:55.499 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.500 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:55.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:55.500 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.500 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:55.758 [2024-10-01 01:55:35.381174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:37:55.758 [2024-10-01 01:55:35.381276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086882 ] 00:37:55.758 [2024-10-01 01:55:35.443478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.758 [2024-10-01 01:55:35.535717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.017 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:56.017 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:56.017 01:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:56.276 Nvme0n1 00:37:56.276 01:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:56.535 [ 00:37:56.535 { 00:37:56.535 "name": "Nvme0n1", 00:37:56.535 "aliases": [ 00:37:56.535 "f7073879-6a50-4592-9391-a75efcb15bc2" 00:37:56.535 ], 00:37:56.535 "product_name": "NVMe disk", 00:37:56.535 "block_size": 4096, 00:37:56.535 "num_blocks": 38912, 00:37:56.535 "uuid": "f7073879-6a50-4592-9391-a75efcb15bc2", 00:37:56.535 "numa_id": 0, 00:37:56.535 "assigned_rate_limits": { 00:37:56.535 "rw_ios_per_sec": 0, 00:37:56.535 "rw_mbytes_per_sec": 0, 00:37:56.535 "r_mbytes_per_sec": 0, 00:37:56.535 "w_mbytes_per_sec": 0 00:37:56.535 }, 00:37:56.535 "claimed": false, 00:37:56.535 "zoned": false, 00:37:56.535 "supported_io_types": { 00:37:56.535 "read": true, 00:37:56.535 "write": true, 00:37:56.535 "unmap": true, 00:37:56.535 "flush": true, 00:37:56.535 "reset": true, 00:37:56.535 "nvme_admin": true, 00:37:56.535 "nvme_io": true, 00:37:56.536 "nvme_io_md": false, 00:37:56.536 "write_zeroes": true, 00:37:56.536 "zcopy": false, 00:37:56.536 "get_zone_info": false, 00:37:56.536 "zone_management": false, 00:37:56.536 "zone_append": false, 00:37:56.536 "compare": true, 00:37:56.536 "compare_and_write": true, 00:37:56.536 "abort": true, 00:37:56.536 "seek_hole": false, 00:37:56.536 "seek_data": false, 00:37:56.536 "copy": true, 
00:37:56.536 "nvme_iov_md": false 00:37:56.536 }, 00:37:56.536 "memory_domains": [ 00:37:56.536 { 00:37:56.536 "dma_device_id": "system", 00:37:56.536 "dma_device_type": 1 00:37:56.536 } 00:37:56.536 ], 00:37:56.536 "driver_specific": { 00:37:56.536 "nvme": [ 00:37:56.536 { 00:37:56.536 "trid": { 00:37:56.536 "trtype": "TCP", 00:37:56.536 "adrfam": "IPv4", 00:37:56.536 "traddr": "10.0.0.2", 00:37:56.536 "trsvcid": "4420", 00:37:56.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:56.536 }, 00:37:56.536 "ctrlr_data": { 00:37:56.536 "cntlid": 1, 00:37:56.536 "vendor_id": "0x8086", 00:37:56.536 "model_number": "SPDK bdev Controller", 00:37:56.536 "serial_number": "SPDK0", 00:37:56.536 "firmware_revision": "25.01", 00:37:56.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:56.536 "oacs": { 00:37:56.536 "security": 0, 00:37:56.536 "format": 0, 00:37:56.536 "firmware": 0, 00:37:56.536 "ns_manage": 0 00:37:56.536 }, 00:37:56.536 "multi_ctrlr": true, 00:37:56.536 "ana_reporting": false 00:37:56.536 }, 00:37:56.536 "vs": { 00:37:56.536 "nvme_version": "1.3" 00:37:56.536 }, 00:37:56.536 "ns_data": { 00:37:56.536 "id": 1, 00:37:56.536 "can_share": true 00:37:56.536 } 00:37:56.536 } 00:37:56.536 ], 00:37:56.536 "mp_policy": "active_passive" 00:37:56.536 } 00:37:56.536 } 00:37:56.536 ] 00:37:56.536 01:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1086908 00:37:56.536 01:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:56.536 01:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:56.794 Running I/O for 10 seconds... 
00:37:57.730 Latency(us) 00:37:57.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:57.730 Nvme0n1 : 1.00 13885.00 54.24 0.00 0.00 0.00 0.00 0.00 00:37:57.730 =================================================================================================================== 00:37:57.730 Total : 13885.00 54.24 0.00 0.00 0.00 0.00 0.00 00:37:57.730 00:37:58.666 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:37:58.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.666 Nvme0n1 : 2.00 14042.50 54.85 0.00 0.00 0.00 0.00 0.00 00:37:58.666 =================================================================================================================== 00:37:58.666 Total : 14042.50 54.85 0.00 0.00 0.00 0.00 0.00 00:37:58.666 00:37:58.924 true 00:37:58.924 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:37:58.924 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:59.184 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:59.184 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:59.184 01:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1086908 00:37:59.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.750 Nvme0n1 : 3.00 14391.67 56.22 0.00 0.00 0.00 0.00 0.00 00:37:59.750 =================================================================================================================== 00:37:59.750 Total : 14391.67 56.22 0.00 0.00 0.00 0.00 0.00 00:37:59.750 00:38:00.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.682 Nvme0n1 : 4.00 14423.50 56.34 0.00 0.00 0.00 0.00 0.00 00:38:00.682 =================================================================================================================== 00:38:00.682 Total : 14423.50 56.34 0.00 0.00 0.00 0.00 0.00 00:38:00.682 00:38:01.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.615 Nvme0n1 : 5.00 14499.20 56.64 0.00 0.00 0.00 0.00 0.00 00:38:01.615 =================================================================================================================== 00:38:01.615 Total : 14499.20 56.64 0.00 0.00 0.00 0.00 0.00 00:38:01.615 00:38:02.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.987 Nvme0n1 : 6.00 14514.67 56.70 0.00 0.00 0.00 0.00 0.00 00:38:02.987 =================================================================================================================== 00:38:02.987 Total : 14514.67 56.70 0.00 0.00 0.00 0.00 0.00 00:38:02.987 00:38:03.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.920 Nvme0n1 : 7.00 14470.86 56.53 0.00 0.00 0.00 0.00 
0.00 00:38:03.920 =================================================================================================================== 00:38:03.920 Total : 14470.86 56.53 0.00 0.00 0.00 0.00 0.00 00:38:03.920 00:38:04.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.853 Nvme0n1 : 8.00 14445.88 56.43 0.00 0.00 0.00 0.00 0.00 00:38:04.853 =================================================================================================================== 00:38:04.853 Total : 14445.88 56.43 0.00 0.00 0.00 0.00 0.00 00:38:04.853 00:38:05.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.786 Nvme0n1 : 9.00 14426.67 56.35 0.00 0.00 0.00 0.00 0.00 00:38:05.786 =================================================================================================================== 00:38:05.786 Total : 14426.67 56.35 0.00 0.00 0.00 0.00 0.00 00:38:05.786 00:38:06.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.719 Nvme0n1 : 10.00 14411.10 56.29 0.00 0.00 0.00 0.00 0.00 00:38:06.719 =================================================================================================================== 00:38:06.719 Total : 14411.10 56.29 0.00 0.00 0.00 0.00 0.00 00:38:06.719 00:38:06.719 00:38:06.719 Latency(us) 00:38:06.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.719 Nvme0n1 : 10.01 14412.28 56.30 0.00 0.00 8875.11 5218.61 19709.35 00:38:06.719 =================================================================================================================== 00:38:06.719 Total : 14412.28 56.30 0.00 0.00 8875.11 5218.61 19709.35 00:38:06.719 { 00:38:06.719 "results": [ 00:38:06.719 { 00:38:06.719 "job": "Nvme0n1", 00:38:06.719 "core_mask": "0x2", 00:38:06.719 "workload": "randwrite", 00:38:06.719 "status": "finished", 00:38:06.719 "queue_depth": 128, 00:38:06.719 "io_size": 4096, 00:38:06.719 "runtime": 10.008063, 00:38:06.719 "iops": 14412.279379136602, 00:38:06.719 "mibps": 56.29796632475235, 00:38:06.719 "io_failed": 0, 00:38:06.719 "io_timeout": 0, 00:38:06.719 "avg_latency_us": 8875.107834697195, 00:38:06.719 "min_latency_us": 5218.607407407408, 00:38:06.719 "max_latency_us": 19709.345185185186 00:38:06.719 } 00:38:06.719 ], 00:38:06.719 "core_count": 1 00:38:06.719 } 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1086882 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1086882 ']' 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1086882 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1086882 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:06.719 01:55:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1086882' 00:38:06.719 killing process with pid 1086882 00:38:06.719 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1086882 00:38:06.719 Received shutdown signal, test time was about 10.000000 seconds 00:38:06.719 00:38:06.719 Latency(us) 00:38:06.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.720 =================================================================================================================== 00:38:06.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:06.720 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1086882 00:38:06.978 01:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:07.236 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:07.495 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:07.495 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:07.753 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:07.754 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:07.754 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:08.012 [2024-10-01 01:55:47.841794] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:08.271 01:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:08.529 request: 00:38:08.529 { 00:38:08.529 "uuid": "d1f58cc0-f2d4-464b-87df-5b8712d8124d", 00:38:08.529 "method": "bdev_lvol_get_lvstores", 00:38:08.529 "req_id": 1 00:38:08.529 } 00:38:08.529 Got JSON-RPC error response 00:38:08.529 response: 00:38:08.529 { 00:38:08.529 "code": -19, 00:38:08.529 "message": "No such device" 00:38:08.529 } 00:38:08.529 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:38:08.529 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:08.530 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:08.530 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:08.530 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:08.788 aio_bdev 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f7073879-6a50-4592-9391-a75efcb15bc2 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f7073879-6a50-4592-9391-a75efcb15bc2 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:08.788 01:55:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:08.788 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:09.064 01:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f7073879-6a50-4592-9391-a75efcb15bc2 -t 2000 00:38:09.347 [ 00:38:09.347 { 00:38:09.347 "name": "f7073879-6a50-4592-9391-a75efcb15bc2", 00:38:09.347 "aliases": [ 00:38:09.347 "lvs/lvol" 00:38:09.347 ], 00:38:09.347 "product_name": "Logical Volume", 00:38:09.347 "block_size": 4096, 00:38:09.347 "num_blocks": 38912, 00:38:09.347 "uuid": "f7073879-6a50-4592-9391-a75efcb15bc2", 00:38:09.347 "assigned_rate_limits": { 00:38:09.347 "rw_ios_per_sec": 0, 00:38:09.347 "rw_mbytes_per_sec": 0, 00:38:09.348 "r_mbytes_per_sec": 0, 00:38:09.348 "w_mbytes_per_sec": 0 00:38:09.348 }, 00:38:09.348 "claimed": false, 00:38:09.348 "zoned": false, 00:38:09.348 "supported_io_types": { 00:38:09.348 "read": true, 00:38:09.348 "write": true, 00:38:09.348 "unmap": true, 00:38:09.348 "flush": false, 00:38:09.348 "reset": true, 00:38:09.348 "nvme_admin": false, 00:38:09.348 "nvme_io": false, 00:38:09.348 "nvme_io_md": false, 00:38:09.348 "write_zeroes": true, 00:38:09.348 "zcopy": false, 00:38:09.348 "get_zone_info": false, 00:38:09.348 "zone_management": false, 00:38:09.348 "zone_append": false, 00:38:09.348 "compare": false, 00:38:09.348 "compare_and_write": false, 00:38:09.348 "abort": false, 00:38:09.348 "seek_hole": true, 00:38:09.348 "seek_data": true, 00:38:09.348 "copy": false, 00:38:09.348 "nvme_iov_md": false 00:38:09.348 }, 00:38:09.348 "driver_specific": { 00:38:09.348 "lvol": { 00:38:09.348 "lvol_store_uuid": "d1f58cc0-f2d4-464b-87df-5b8712d8124d", 00:38:09.348 "base_bdev": "aio_bdev", 00:38:09.348 "thin_provision": false, 00:38:09.348 "num_allocated_clusters": 38, 00:38:09.348 "snapshot": false, 00:38:09.348 "clone": false, 00:38:09.348 "esnap_clone": false 00:38:09.348 } 00:38:09.348 } 00:38:09.348 } 00:38:09.348 ] 00:38:09.348 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:38:09.348 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:09.348 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:09.622 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:09.622 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:09.622 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:09.880 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:38:09.880 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7073879-6a50-4592-9391-a75efcb15bc2 00:38:10.137 01:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1f58cc0-f2d4-464b-87df-5b8712d8124d 00:38:10.396 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:10.654 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.654 00:38:10.654 real 0m17.901s 00:38:10.654 user 0m17.365s 00:38:10.654 sys 0m1.860s 00:38:10.654 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:10.654 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:10.654 ************************************ 00:38:10.654 END TEST lvs_grow_clean 00:38:10.654 ************************************ 00:38:10.654 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:10.655 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:10.655 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:10.655 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:10.913 ************************************ 00:38:10.913 START TEST lvs_grow_dirty 00:38:10.913 ************************************ 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.913 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:11.171 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:11.171 01:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:11.429 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=68112e32-e362-4198-a546-85f5358b3721 00:38:11.429 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:11.429 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:11.688 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:11.688 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:11.688 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68112e32-e362-4198-a546-85f5358b3721 lvol 150 00:38:11.948 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3ac2fafb-8906-4177-b814-de1f47005135 00:38:11.948 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:11.948 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:12.206 [2024-10-01 01:55:51.945745] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:12.206 [2024-10-01 01:55:51.945858] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:12.206 true 00:38:12.206 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:12.206 01:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:12.465 01:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:12.465 01:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:12.723 01:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ac2fafb-8906-4177-b814-de1f47005135 00:38:12.981 01:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:13.240 [2024-10-01 01:55:53.042034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.240 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1088928 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1088928 /var/tmp/bdevperf.sock 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1088928 ']' 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.500 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:13.759 [2024-10-01 01:55:53.372726] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:38:13.759 [2024-10-01 01:55:53.372829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088928 ] 00:38:13.759 [2024-10-01 01:55:53.437316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.759 [2024-10-01 01:55:53.529794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.017 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:14.017 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:14.017 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:14.276 Nvme0n1 00:38:14.276 01:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:14.534 [ 00:38:14.534 { 00:38:14.534 "name": "Nvme0n1", 00:38:14.534 "aliases": [ 00:38:14.534 "3ac2fafb-8906-4177-b814-de1f47005135" 00:38:14.534 ], 00:38:14.534 "product_name": "NVMe disk", 00:38:14.534 "block_size": 4096, 00:38:14.534 "num_blocks": 38912, 00:38:14.534 "uuid": "3ac2fafb-8906-4177-b814-de1f47005135", 00:38:14.534 "numa_id": 0, 00:38:14.534 "assigned_rate_limits": { 00:38:14.534 "rw_ios_per_sec": 0, 00:38:14.534 "rw_mbytes_per_sec": 0, 00:38:14.534 "r_mbytes_per_sec": 0, 00:38:14.534 "w_mbytes_per_sec": 0 00:38:14.534 }, 00:38:14.534 "claimed": false, 00:38:14.534 "zoned": false, 00:38:14.534 "supported_io_types": { 00:38:14.534 "read": true, 00:38:14.534 "write": true, 00:38:14.534 "unmap": true, 00:38:14.534 "flush": true, 00:38:14.534 "reset": true, 00:38:14.534 "nvme_admin": true, 00:38:14.534 "nvme_io": true, 00:38:14.534 "nvme_io_md": false, 00:38:14.534 "write_zeroes": true, 00:38:14.534 "zcopy": false, 00:38:14.534 "get_zone_info": false, 00:38:14.534 "zone_management": false, 00:38:14.534 "zone_append": false, 00:38:14.534 "compare": true, 00:38:14.534 "compare_and_write": true, 00:38:14.534 "abort": true, 00:38:14.534 "seek_hole": false, 00:38:14.534 "seek_data": false, 00:38:14.534 "copy": true, 00:38:14.534 "nvme_iov_md": false 00:38:14.534 }, 00:38:14.535 "memory_domains": [ 00:38:14.535 { 00:38:14.535 "dma_device_id": "system", 00:38:14.535 "dma_device_type": 1 00:38:14.535 } 00:38:14.535 ], 00:38:14.535 "driver_specific": { 00:38:14.535 "nvme": [ 00:38:14.535 { 00:38:14.535 "trid": { 00:38:14.535 "trtype": "TCP", 00:38:14.535 "adrfam": "IPv4", 00:38:14.535 "traddr": "10.0.0.2", 00:38:14.535 "trsvcid": "4420", 00:38:14.535 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:14.535 }, 00:38:14.535 "ctrlr_data": { 00:38:14.535 "cntlid": 1, 00:38:14.535 "vendor_id": "0x8086", 00:38:14.535 "model_number": "SPDK bdev Controller", 00:38:14.535 "serial_number": "SPDK0", 00:38:14.535 "firmware_revision": "25.01", 00:38:14.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.535 "oacs": { 00:38:14.535 "security": 0, 00:38:14.535 "format": 0, 00:38:14.535 "firmware": 0, 00:38:14.535 "ns_manage": 0 00:38:14.535 }, 
00:38:14.535 "multi_ctrlr": true, 00:38:14.535 "ana_reporting": false 00:38:14.535 }, 00:38:14.535 "vs": { 00:38:14.535 "nvme_version": "1.3" 00:38:14.535 }, 00:38:14.535 "ns_data": { 00:38:14.535 "id": 1, 00:38:14.535 "can_share": true 00:38:14.535 } 00:38:14.535 } 00:38:14.535 ], 00:38:14.535 "mp_policy": "active_passive" 00:38:14.535 } 00:38:14.535 } 00:38:14.535 ] 00:38:14.535 01:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1089063 00:38:14.535 01:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:14.535 01:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:14.535 Running I/O for 10 seconds... 00:38:15.910 Latency(us) 00:38:15.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.910 Nvme0n1 : 1.00 13758.00 53.74 0.00 0.00 0.00 0.00 0.00 00:38:15.910 =================================================================================================================== 00:38:15.910 Total : 13758.00 53.74 0.00 0.00 0.00 0.00 0.00 00:38:15.910 00:38:16.476 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68112e32-e362-4198-a546-85f5358b3721 00:38:16.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.735 Nvme0n1 : 2.00 13916.00 54.36 0.00 0.00 0.00 0.00 0.00 00:38:16.735 =================================================================================================================== 00:38:16.735 Total : 13916.00 54.36 0.00 0.00 0.00 0.00 0.00 00:38:16.735 00:38:16.735 true 00:38:16.735 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:16.735 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:17.302 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:17.302 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:17.302 01:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1089063 00:38:17.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.561 Nvme0n1 : 3.00 13989.33 54.65 0.00 0.00 0.00 0.00 0.00 00:38:17.561 =================================================================================================================== 00:38:17.561 Total : 13989.33 54.65 0.00 0.00 0.00 0.00 0.00 00:38:17.561 00:38:18.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.936 Nvme0n1 : 4.00 14084.25 55.02 0.00 0.00 0.00 0.00 0.00 00:38:18.936 =================================================================================================================== 
00:38:18.936 Total : 14084.25 55.02 0.00 0.00 0.00 0.00 0.00 00:38:18.936 00:38:19.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.880 Nvme0n1 : 5.00 14137.60 55.23 0.00 0.00 0.00 0.00 0.00 00:38:19.880 =================================================================================================================== 00:38:19.880 Total : 14137.60 55.23 0.00 0.00 0.00 0.00 0.00 00:38:19.880 00:38:20.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.811 Nvme0n1 : 6.00 14169.17 55.35 0.00 0.00 0.00 0.00 0.00 00:38:20.811 =================================================================================================================== 00:38:20.811 Total : 14169.17 55.35 0.00 0.00 0.00 0.00 0.00 00:38:20.811 00:38:21.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.795 Nvme0n1 : 7.00 14200.86 55.47 0.00 0.00 0.00 0.00 0.00 00:38:21.795 =================================================================================================================== 00:38:21.795 Total : 14200.86 55.47 0.00 0.00 0.00 0.00 0.00 00:38:21.795 00:38:22.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.728 Nvme0n1 : 8.00 14224.50 55.56 0.00 0.00 0.00 0.00 0.00 00:38:22.728 =================================================================================================================== 00:38:22.728 Total : 14224.50 55.56 0.00 0.00 0.00 0.00 0.00 00:38:22.728 00:38:23.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.662 Nvme0n1 : 9.00 14242.89 55.64 0.00 0.00 0.00 0.00 0.00 00:38:23.662 =================================================================================================================== 00:38:23.662 Total : 14242.89 55.64 0.00 0.00 0.00 0.00 0.00 00:38:23.662 00:38:24.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.596 Nvme0n1 : 10.00 14270.00 55.74 0.00 0.00 0.00 0.00 0.00 00:38:24.596 =================================================================================================================== 00:38:24.596 Total : 14270.00 55.74 0.00 0.00 0.00 0.00 0.00 00:38:24.596 00:38:24.596 00:38:24.596 Latency(us) 00:38:24.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.596 Nvme0n1 : 10.00 14275.83 55.76 0.00 0.00 8961.40 5291.43 19515.16 00:38:24.596 =================================================================================================================== 00:38:24.596 Total : 14275.83 55.76 0.00 0.00 8961.40 5291.43 19515.16 00:38:24.596 { 00:38:24.596 "results": [ 00:38:24.596 { 00:38:24.596 "job": "Nvme0n1", 00:38:24.596 "core_mask": "0x2", 00:38:24.596 "workload": "randwrite", 00:38:24.596 "status": "finished", 00:38:24.596 "queue_depth": 128, 00:38:24.596 "io_size": 4096, 00:38:24.596 "runtime": 10.004882, 00:38:24.596 "iops": 14275.830539530602, 00:38:24.596 "mibps": 55.76496304504141, 00:38:24.596 "io_failed": 0, 00:38:24.596 "io_timeout": 0, 00:38:24.596 "avg_latency_us": 8961.399107343825, 00:38:24.596 "min_latency_us": 5291.425185185185, 00:38:24.596 "max_latency_us": 19515.164444444443 00:38:24.596 } 00:38:24.596 ], 00:38:24.596 "core_count": 1 00:38:24.596 } 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1088928 00:38:24.596 
01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1088928 ']' 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1088928 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088928 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088928' 00:38:24.596 killing process with pid 1088928 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1088928 00:38:24.596 Received shutdown signal, test time was about 10.000000 seconds 00:38:24.596 00:38:24.596 Latency(us) 00:38:24.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.596 =================================================================================================================== 00:38:24.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.596 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1088928 00:38:24.854 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:25.112 01:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:25.678 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:25.678 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1086453 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1086453 00:38:25.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1086453 Killed 
"${NVMF_APP[@]}" "$@" 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=1090375 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 1090375 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1090375 ']' 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:25.936 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:25.936 [2024-10-01 01:56:05.624743] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:25.936 [2024-10-01 01:56:05.625765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:25.936 [2024-10-01 01:56:05.625835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.936 [2024-10-01 01:56:05.694794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.936 [2024-10-01 01:56:05.784714] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.936 [2024-10-01 01:56:05.784785] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.936 [2024-10-01 01:56:05.784802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.936 [2024-10-01 01:56:05.784816] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:38:25.936 [2024-10-01 01:56:05.784828] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:25.936 [2024-10-01 01:56:05.784859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.194 [2024-10-01 01:56:05.874571] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:26.194 [2024-10-01 01:56:05.874919] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.194 01:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:26.451 [2024-10-01 01:56:06.184059] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:26.451 [2024-10-01 01:56:06.184218] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:26.451 [2024-10-01 01:56:06.184278] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3ac2fafb-8906-4177-b814-de1f47005135 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=3ac2fafb-8906-4177-b814-de1f47005135 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:26.451 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:26.709 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3ac2fafb-8906-4177-b814-de1f47005135 -t 2000 00:38:26.967 [ 00:38:26.967 { 00:38:26.967 "name": "3ac2fafb-8906-4177-b814-de1f47005135", 00:38:26.967 "aliases": [ 00:38:26.967 "lvs/lvol" 00:38:26.967 ], 00:38:26.967 "product_name": "Logical Volume", 00:38:26.967 "block_size": 4096, 00:38:26.967 "num_blocks": 38912, 00:38:26.967 "uuid": "3ac2fafb-8906-4177-b814-de1f47005135", 00:38:26.967 "assigned_rate_limits": { 00:38:26.967 "rw_ios_per_sec": 0, 00:38:26.967 "rw_mbytes_per_sec": 0, 00:38:26.967 "r_mbytes_per_sec": 0, 00:38:26.967 "w_mbytes_per_sec": 0 00:38:26.967 }, 00:38:26.967 "claimed": false, 00:38:26.967 "zoned": false, 00:38:26.967 "supported_io_types": { 00:38:26.967 "read": true, 00:38:26.967 "write": true, 00:38:26.967 "unmap": true, 00:38:26.967 "flush": false, 00:38:26.967 "reset": true, 00:38:26.967 "nvme_admin": false, 00:38:26.967 "nvme_io": false, 00:38:26.967 "nvme_io_md": false, 00:38:26.967 "write_zeroes": true, 00:38:26.967 "zcopy": false, 00:38:26.967 "get_zone_info": false, 00:38:26.967 "zone_management": false, 00:38:26.967 "zone_append": false, 00:38:26.967 "compare": false, 00:38:26.967 "compare_and_write": false, 00:38:26.967 "abort": false, 00:38:26.967 "seek_hole": true, 00:38:26.967 "seek_data": true, 00:38:26.967 "copy": false, 00:38:26.967 "nvme_iov_md": false 00:38:26.967 }, 00:38:26.967 "driver_specific": { 00:38:26.967 "lvol": { 00:38:26.967 "lvol_store_uuid": "68112e32-e362-4198-a546-85f5358b3721", 00:38:26.967 "base_bdev": "aio_bdev", 00:38:26.967 "thin_provision": false, 00:38:26.967 "num_allocated_clusters": 38, 00:38:26.967 "snapshot": false, 00:38:26.967 "clone": false, 00:38:26.967 "esnap_clone": false 00:38:26.967 } 00:38:26.967 } 00:38:26.967 } 00:38:26.967 ] 00:38:26.967 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:26.967 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:26.967 01:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:27.225 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:27.225 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:27.225 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:27.790 [2024-10-01 01:56:07.601415] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:27.790 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:28.047 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:28.047 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:28.047 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:28.047 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:28.305 request: 00:38:28.305 { 00:38:28.305 "uuid": "68112e32-e362-4198-a546-85f5358b3721", 00:38:28.305 "method": "bdev_lvol_get_lvstores", 00:38:28.305 "req_id": 1 00:38:28.305 } 00:38:28.305 Got JSON-RPC error response 00:38:28.305 response: 00:38:28.305 { 00:38:28.305 "code": -19, 00:38:28.305 "message": "No such device" 00:38:28.305 } 00:38:28.305 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:28.305 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:28.305 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:28.305 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:28.305 01:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:28.563 
aio_bdev 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3ac2fafb-8906-4177-b814-de1f47005135 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=3ac2fafb-8906-4177-b814-de1f47005135 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:28.563 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:28.820 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3ac2fafb-8906-4177-b814-de1f47005135 -t 2000 00:38:29.078 [ 00:38:29.078 { 00:38:29.078 "name": "3ac2fafb-8906-4177-b814-de1f47005135", 00:38:29.078 "aliases": [ 00:38:29.078 "lvs/lvol" 00:38:29.078 ], 00:38:29.078 "product_name": "Logical Volume", 00:38:29.078 "block_size": 4096, 00:38:29.078 "num_blocks": 38912, 00:38:29.078 "uuid": "3ac2fafb-8906-4177-b814-de1f47005135", 00:38:29.078 "assigned_rate_limits": { 00:38:29.078 "rw_ios_per_sec": 0, 00:38:29.078 "rw_mbytes_per_sec": 0, 00:38:29.078 "r_mbytes_per_sec": 0, 00:38:29.078 "w_mbytes_per_sec": 0 00:38:29.078 }, 00:38:29.078 "claimed": false, 00:38:29.078 "zoned": false, 00:38:29.078 "supported_io_types": { 00:38:29.078 "read": true, 00:38:29.078 "write": true, 00:38:29.078 "unmap": true, 00:38:29.078 "flush": false, 00:38:29.078 "reset": true, 00:38:29.078 "nvme_admin": false, 00:38:29.078 "nvme_io": false, 00:38:29.078 "nvme_io_md": false, 00:38:29.078 "write_zeroes": true, 00:38:29.078 "zcopy": false, 00:38:29.078 "get_zone_info": false, 00:38:29.078 "zone_management": false, 00:38:29.078 "zone_append": false, 00:38:29.078 "compare": false, 00:38:29.078 "compare_and_write": false, 00:38:29.078 "abort": false, 00:38:29.078 "seek_hole": true, 00:38:29.078 "seek_data": true, 00:38:29.078 "copy": false, 00:38:29.078 "nvme_iov_md": false 00:38:29.078 }, 00:38:29.078 "driver_specific": { 00:38:29.078 "lvol": { 00:38:29.078 "lvol_store_uuid": "68112e32-e362-4198-a546-85f5358b3721", 00:38:29.078 "base_bdev": "aio_bdev", 00:38:29.078 "thin_provision": false, 00:38:29.078 "num_allocated_clusters": 38, 00:38:29.078 "snapshot": false, 00:38:29.078 "clone": false, 00:38:29.078 "esnap_clone": false 00:38:29.078 } 00:38:29.078 } 00:38:29.078 } 00:38:29.078 ] 00:38:29.078 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:29.078 01:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:29.078 01:56:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:29.336 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:29.336 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68112e32-e362-4198-a546-85f5358b3721 00:38:29.336 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:29.592 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:29.592 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ac2fafb-8906-4177-b814-de1f47005135 00:38:29.850 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68112e32-e362-4198-a546-85f5358b3721 00:38:30.108 01:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:30.367 00:38:30.367 real 0m19.625s 00:38:30.367 user 0m36.488s 00:38:30.367 sys 0m4.825s 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:30.367 ************************************ 00:38:30.367 END TEST lvs_grow_dirty 00:38:30.367 ************************************ 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:30.367 nvmf_trace.0 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:30.367 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:30.367 rmmod nvme_tcp 00:38:30.626 rmmod nvme_fabrics 00:38:30.626 rmmod nvme_keyring 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 1090375 ']' 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 1090375 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1090375 ']' 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1090375 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1090375 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1090375' 00:38:30.626 killing process with pid 1090375 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1090375 00:38:30.626 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1090375 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 
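The lvs_grow_dirty verification traced above reduces to a few rpc.py calls plus jq filters. A condensed sketch of those checks (not the script itself), reusing the run-specific lvstore UUID, lvol name, and aio file path from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=68112e32-e362-4198-a546-85f5358b3721    # lvstore UUID from this run
    lvol=3ac2fafb-8906-4177-b814-de1f47005135   # lvol bdev name from this run
    # with the aio base bdev deleted, the lvstore query must fail with -19 "No such device"
    $rpc bdev_lvol_get_lvstores -u $lvs || true
    # re-create the base bdev, wait for the lvol to reappear, then re-check cluster accounting
    $rpc bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_get_bdevs -b $lvol -t 2000
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'        # 61 expected
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'  # 99 expected
    # final cleanup, as in steps @92-@95 of the trace
    $rpc bdev_lvol_delete $lvol
    $rpc bdev_lvol_delete_lvstore -u $lvs
    $rpc bdev_aio_delete aio_bdev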
00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:30.884 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:30.885 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.885 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:30.885 01:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:32.852 00:38:32.852 real 0m43.039s 00:38:32.852 user 0m55.694s 00:38:32.852 sys 0m8.639s 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:32.852 ************************************ 00:38:32.852 END TEST nvmf_lvs_grow 00:38:32.852 ************************************ 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:32.852 ************************************ 00:38:32.852 START TEST nvmf_bdev_io_wait 00:38:32.852 ************************************ 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:32.852 * Looking for test storage... 
00:38:32.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:38:32.852 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.111 --rc genhtml_branch_coverage=1 00:38:33.111 --rc genhtml_function_coverage=1 00:38:33.111 --rc genhtml_legend=1 00:38:33.111 --rc geninfo_all_blocks=1 00:38:33.111 --rc geninfo_unexecuted_blocks=1 00:38:33.111 00:38:33.111 ' 00:38:33.111 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:33.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.111 --rc genhtml_branch_coverage=1 00:38:33.111 --rc genhtml_function_coverage=1 00:38:33.112 --rc genhtml_legend=1 00:38:33.112 --rc geninfo_all_blocks=1 00:38:33.112 --rc geninfo_unexecuted_blocks=1 00:38:33.112 00:38:33.112 ' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:33.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.112 --rc genhtml_branch_coverage=1 00:38:33.112 --rc genhtml_function_coverage=1 00:38:33.112 --rc genhtml_legend=1 00:38:33.112 --rc geninfo_all_blocks=1 00:38:33.112 --rc geninfo_unexecuted_blocks=1 00:38:33.112 00:38:33.112 ' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:33.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.112 --rc genhtml_branch_coverage=1 00:38:33.112 --rc genhtml_function_coverage=1 00:38:33.112 --rc genhtml_legend=1 00:38:33.112 --rc geninfo_all_blocks=1 00:38:33.112 --rc 
geninfo_unexecuted_blocks=1 00:38:33.112 00:38:33.112 ' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:33.112 01:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:35.018 01:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:35.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:35.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.018 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:35.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.019 01:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:35.019 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:35.019 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:35.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:38:35.278 00:38:35.278 --- 10.0.0.2 ping statistics --- 00:38:35.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.278 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:35.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:38:35.278 00:38:35.278 --- 10.0.0.1 ping statistics --- 00:38:35.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.278 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=1092906 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 1092906 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1092906 ']' 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
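The nvmf_tcp_init sequence traced above stitches the two ice ports (cvl_0_0 / cvl_0_1) into a point-to-point test network by moving the target-side port into its own namespace. Condensed from the trace (interface names and the 10.0.0.0/24 addresses are specific to this setup; the harness also tags the iptables rule with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1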
00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:35.278 01:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.278 [2024-10-01 01:56:14.969689] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:35.278 [2024-10-01 01:56:14.970794] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:35.278 [2024-10-01 01:56:14.970856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.278 [2024-10-01 01:56:15.043439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:35.538 [2024-10-01 01:56:15.139468] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:35.538 [2024-10-01 01:56:15.139538] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:35.538 [2024-10-01 01:56:15.139552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:35.538 [2024-10-01 01:56:15.139563] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:35.538 [2024-10-01 01:56:15.139573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:35.538 [2024-10-01 01:56:15.139672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.538 [2024-10-01 01:56:15.139727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:35.538 [2024-10-01 01:56:15.139728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.538 [2024-10-01 01:56:15.140210] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:35.538 [2024-10-01 01:56:15.139698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 [2024-10-01 01:56:15.310729] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:35.538 [2024-10-01 01:56:15.310908] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:35.538 [2024-10-01 01:56:15.311788] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:35.538 [2024-10-01 01:56:15.312627] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
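nvmfappstart runs the target inside the target namespace, pinned to four cores, with interrupt mode and --wait-for-rpc enabled, which is why the reactors above report intr mode and why the script must issue framework_start_init over RPC before creating the transport. Roughly equivalent to the invocation recorded earlier in this trace (the backgrounding and PID capture are a reconstruction of what the helper does):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!    # 1092906 in this run; waitforlisten blocks until /var/tmp/spdk.sock is up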
00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 [2024-10-01 01:56:15.320414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 Malloc0 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.538 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:35.538 [2024-10-01 01:56:15.388585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.797 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.797 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1093018 00:38:35.797 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:35.798 01:56:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1093021 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:35.798 { 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme$subsystem", 00:38:35.798 "trtype": "$TEST_TRANSPORT", 00:38:35.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "$NVMF_PORT", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.798 "hdgst": ${hdgst:-false}, 00:38:35.798 "ddgst": ${ddgst:-false} 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 } 00:38:35.798 EOF 00:38:35.798 )") 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1093023 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:35.798 { 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme$subsystem", 00:38:35.798 "trtype": "$TEST_TRANSPORT", 00:38:35.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "$NVMF_PORT", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.798 "hdgst": ${hdgst:-false}, 00:38:35.798 "ddgst": ${ddgst:-false} 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 } 00:38:35.798 EOF 00:38:35.798 )") 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=1093027 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:35.798 { 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme$subsystem", 00:38:35.798 "trtype": "$TEST_TRANSPORT", 00:38:35.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "$NVMF_PORT", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.798 "hdgst": ${hdgst:-false}, 00:38:35.798 "ddgst": ${ddgst:-false} 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 } 00:38:35.798 EOF 00:38:35.798 )") 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:35.798 { 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme$subsystem", 00:38:35.798 "trtype": "$TEST_TRANSPORT", 00:38:35.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "$NVMF_PORT", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.798 "hdgst": ${hdgst:-false}, 00:38:35.798 "ddgst": ${ddgst:-false} 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 } 00:38:35.798 EOF 00:38:35.798 )") 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1093018 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme1", 00:38:35.798 "trtype": "tcp", 00:38:35.798 "traddr": "10.0.0.2", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "4420", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.798 "hdgst": false, 00:38:35.798 "ddgst": false 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 }' 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme1", 00:38:35.798 "trtype": "tcp", 00:38:35.798 "traddr": "10.0.0.2", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "4420", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.798 "hdgst": false, 00:38:35.798 "ddgst": false 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 }' 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme1", 00:38:35.798 "trtype": "tcp", 00:38:35.798 "traddr": "10.0.0.2", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "4420", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.798 "hdgst": false, 00:38:35.798 "ddgst": false 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 }' 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:38:35.798 01:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:35.798 "params": { 00:38:35.798 "name": "Nvme1", 00:38:35.798 "trtype": "tcp", 00:38:35.798 "traddr": "10.0.0.2", 00:38:35.798 "adrfam": "ipv4", 00:38:35.798 "trsvcid": "4420", 00:38:35.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.798 "hdgst": false, 00:38:35.798 "ddgst": false 00:38:35.798 }, 00:38:35.798 "method": "bdev_nvme_attach_controller" 00:38:35.798 }' 00:38:35.798 [2024-10-01 01:56:15.438664] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:35.798 [2024-10-01 01:56:15.438765] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:35.798 [2024-10-01 01:56:15.439175] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:35.799 [2024-10-01 01:56:15.439179] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:38:35.799 [2024-10-01 01:56:15.439176] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:35.799 [2024-10-01 01:56:15.439257] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:35.799 [2024-10-01 01:56:15.439258] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:35.799 [2024-10-01 01:56:15.439260] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:35.799 [2024-10-01 01:56:15.616048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.057 [2024-10-01 01:56:15.690678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:38:36.057 [2024-10-01 01:56:15.716365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.057 [2024-10-01 01:56:15.791243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:38:36.057 [2024-10-01 01:56:15.814704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.057 [2024-10-01 01:56:15.881418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.057 [2024-10-01 01:56:15.887269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:36.316 [2024-10-01 01:56:15.947435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:38:36.316 Running I/O for 1 seconds... 00:38:36.316 Running I/O for 1 seconds... 00:38:36.576 Running I/O for 1 seconds... 00:38:36.576 Running I/O for 1 seconds...
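For readability, the nvmf_bdev_io_wait setup traced above reduces to roughly the following sketch. The rpc_cmd calls, paths, NQN, listener address and bdevperf flags are copied from the trace; the BDEVPERF variable, the process substitution written out for the /dev/fd/63 argument, and the bare wait (in place of the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID bookkeeping) are shorthand added here, not part of the original script.

# Target side: TCP transport, one Malloc bdev, one subsystem with a TCP listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: one bdevperf instance per workload, each fed the generated
# target JSON (shown in the trace as --json /dev/fd/63)
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait   # the script actually waits on the individual PIDs printed above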
00:38:37.513 11624.00 IOPS, 45.41 MiB/s 00:38:37.513 Latency(us) 00:38:37.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.513 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:37.513 Nvme1n1 : 1.01 11684.04 45.64 0.00 0.00 10916.52 4490.43 13301.38 00:38:37.513 =================================================================================================================== 00:38:37.513 Total : 11684.04 45.64 0.00 0.00 10916.52 4490.43 13301.38 00:38:37.513 7129.00 IOPS, 27.85 MiB/s 00:38:37.513 Latency(us) 00:38:37.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.513 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:37.513 Nvme1n1 : 1.01 7181.96 28.05 0.00 0.00 17720.69 5315.70 22233.69 00:38:37.513 =================================================================================================================== 00:38:37.513 Total : 7181.96 28.05 0.00 0.00 17720.69 5315.70 22233.69 00:38:37.771 8010.00 IOPS, 31.29 MiB/s 00:38:37.772 Latency(us) 00:38:37.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.772 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:37.772 Nvme1n1 : 1.01 8082.12 31.57 0.00 0.00 15770.59 5995.33 23398.78 00:38:37.772 =================================================================================================================== 00:38:37.772 Total : 8082.12 31.57 0.00 0.00 15770.59 5995.33 23398.78 00:38:37.772 199680.00 IOPS, 780.00 MiB/s 00:38:37.772 Latency(us) 00:38:37.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.772 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:37.772 Nvme1n1 : 1.00 199309.71 778.55 0.00 0.00 638.82 298.86 1844.72 00:38:37.772 =================================================================================================================== 00:38:37.772 Total : 199309.71 778.55 0.00 0.00 638.82 298.86 1844.72 00:38:37.772 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1093021 00:38:37.772 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1093023 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1093027 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:38.030 01:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.030 rmmod nvme_tcp 00:38:38.030 rmmod nvme_fabrics 00:38:38.030 rmmod nvme_keyring 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 1092906 ']' 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 1092906 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1092906 ']' 00:38:38.030 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1092906 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1092906 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1092906' 00:38:38.031 killing process with pid 1092906 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1092906 00:38:38.031 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1092906 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:38:38.291 01:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.291 01:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.195 01:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:40.195 00:38:40.195 real 0m7.345s 00:38:40.195 user 0m15.004s 00:38:40.195 sys 0m4.420s 00:38:40.195 01:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:40.195 01:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:40.195 ************************************ 00:38:40.195 END TEST nvmf_bdev_io_wait 00:38:40.195 ************************************ 00:38:40.195 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:40.195 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:40.195 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:40.195 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:40.195 ************************************ 00:38:40.195 START TEST nvmf_queue_depth 00:38:40.195 ************************************ 00:38:40.195 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:40.455 * Looking for test storage... 
00:38:40.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.455 --rc genhtml_branch_coverage=1 00:38:40.455 --rc genhtml_function_coverage=1 00:38:40.455 --rc genhtml_legend=1 00:38:40.455 --rc geninfo_all_blocks=1 00:38:40.455 --rc geninfo_unexecuted_blocks=1 00:38:40.455 00:38:40.455 ' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.455 --rc genhtml_branch_coverage=1 00:38:40.455 --rc genhtml_function_coverage=1 00:38:40.455 --rc genhtml_legend=1 00:38:40.455 --rc geninfo_all_blocks=1 00:38:40.455 --rc geninfo_unexecuted_blocks=1 00:38:40.455 00:38:40.455 ' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:40.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.455 --rc genhtml_branch_coverage=1 00:38:40.455 --rc genhtml_function_coverage=1 00:38:40.455 --rc genhtml_legend=1 00:38:40.455 --rc geninfo_all_blocks=1 00:38:40.455 --rc geninfo_unexecuted_blocks=1 00:38:40.455 00:38:40.455 ' 00:38:40.455 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:40.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.456 --rc genhtml_branch_coverage=1 00:38:40.456 --rc genhtml_function_coverage=1 00:38:40.456 --rc genhtml_legend=1 00:38:40.456 --rc geninfo_all_blocks=1 00:38:40.456 --rc 
geninfo_unexecuted_blocks=1 00:38:40.456 00:38:40.456 ' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.456 01:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:42.355 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:42.356 01:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:42.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:42.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:42.356 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:42.356 01:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:42.356 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:42.356 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:42.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:42.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:38:42.614 00:38:42.614 --- 10.0.0.2 ping statistics --- 00:38:42.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.614 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:42.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:42.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:38:42.614 00:38:42.614 --- 10.0.0.1 ping statistics --- 00:38:42.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.614 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=1095276 00:38:42.614 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 1095276 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1095276 ']' 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
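For readability, the nvmftestinit/nvmfappstart sequence traced above reduces to roughly the following sketch. Interface names (cvl_0_0, cvl_0_1), addresses, the iptables rule and the nvmf_tgt invocation are copied from the trace; the ordering is simplified and the helper-function indirection in nvmf/common.sh is flattened, so treat this as an outline of this run rather than the script itself.

# Move one port of the detected NIC pair into a namespace and address both sides
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify connectivity in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace, interrupt mode, core mask 0x2 (core 1)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
waitforlisten 1095276   # PID as reported in this run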
00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:42.615 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:42.615 [2024-10-01 01:56:22.450937] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:42.615 [2024-10-01 01:56:22.452102] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:38:42.615 [2024-10-01 01:56:22.452176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.873 [2024-10-01 01:56:22.521869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.873 [2024-10-01 01:56:22.605941] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.873 [2024-10-01 01:56:22.605995] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.873 [2024-10-01 01:56:22.606042] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.873 [2024-10-01 01:56:22.606055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.873 [2024-10-01 01:56:22.606065] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:42.873 [2024-10-01 01:56:22.606093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.873 [2024-10-01 01:56:22.690560] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:42.873 [2024-10-01 01:56:22.690893] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:42.873 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:42.873 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:42.873 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:42.873 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:42.873 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 [2024-10-01 01:56:22.746625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 Malloc0 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.132 [2024-10-01 01:56:22.802781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1095296 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1095296 /var/tmp/bdevperf.sock 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1095296 ']' 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:43.132 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:43.133 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:43.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:43.133 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:43.133 01:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.133 [2024-10-01 01:56:22.855468] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:38:43.133 [2024-10-01 01:56:22.855556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095296 ] 00:38:43.133 [2024-10-01 01:56:22.915769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.392 [2024-10-01 01:56:23.002215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.392 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:43.392 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:43.392 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:43.392 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.392 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:43.650 NVMe0n1 00:38:43.650 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.650 01:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:43.650 Running I/O for 10 seconds... 00:38:53.928 8163.00 IOPS, 31.89 MiB/s 8192.00 IOPS, 32.00 MiB/s 8199.00 IOPS, 32.03 MiB/s 8290.00 IOPS, 32.38 MiB/s 8382.40 IOPS, 32.74 MiB/s 8365.17 IOPS, 32.68 MiB/s 8344.29 IOPS, 32.59 MiB/s 8336.50 IOPS, 32.56 MiB/s 8371.56 IOPS, 32.70 MiB/s 8388.60 IOPS, 32.77 MiB/s 00:38:53.928 Latency(us) 00:38:53.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:53.928 Verification LBA range: start 0x0 length 0x4000 00:38:53.928 NVMe0n1 : 10.10 8402.03 32.82 0.00 0.00 121298.40 24272.59 74177.04 00:38:53.928 =================================================================================================================== 00:38:53.928 Total : 8402.03 32.82 0.00 0.00 121298.40 24272.59 74177.04 00:38:53.928 { 00:38:53.928 "results": [ 00:38:53.928 { 00:38:53.928 "job": "NVMe0n1", 00:38:53.928 "core_mask": "0x1", 00:38:53.928 "workload": "verify", 00:38:53.928 "status": "finished", 00:38:53.928 "verify_range": { 00:38:53.928 "start": 0, 00:38:53.928 "length": 16384 00:38:53.928 }, 00:38:53.928 "queue_depth": 1024, 00:38:53.928 "io_size": 4096, 00:38:53.928 "runtime": 10.102204, 00:38:53.928 "iops": 8402.027913908687, 00:38:53.928 "mibps": 32.82042153870581, 00:38:53.928 "io_failed": 0, 00:38:53.928 "io_timeout": 0, 00:38:53.928 "avg_latency_us": 121298.40305410797, 00:38:53.928 "min_latency_us": 24272.59259259259, 00:38:53.928 "max_latency_us": 74177.04296296297 00:38:53.928 } 00:38:53.928 ], 00:38:53.928 "core_count": 1 00:38:53.928 } 00:38:53.928 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1095296 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1095296 ']' 
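With the target listening, the two commands above attach the remote namespace to bdevperf as NVMe0 and start the timed run; the JSON block that follows is bdevperf's own report (about 8.4k IOPS, roughly 33 MiB/s, at queue depth 1024 on this host). Under the same path assumptions as the sketches above:

    # Attach the NVMe-oF TCP controller inside the bdevperf process, then run the workload.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"    # the harness's killprocess also waits for the PID to exit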
00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1095296 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1095296 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1095296' 00:38:53.929 killing process with pid 1095296 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1095296 00:38:53.929 Received shutdown signal, test time was about 10.000000 seconds 00:38:53.929 00:38:53.929 Latency(us) 00:38:53.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.929 =================================================================================================================== 00:38:53.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:53.929 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1095296 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.188 rmmod nvme_tcp 00:38:54.188 rmmod nvme_fabrics 00:38:54.188 rmmod nvme_keyring 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 1095276 ']' 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 1095276 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1095276 ']' 00:38:54.188 01:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1095276 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1095276 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1095276' 00:38:54.188 killing process with pid 1095276 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1095276 00:38:54.188 01:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1095276 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.446 01:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.982 00:38:56.982 real 0m16.269s 00:38:56.982 user 0m22.487s 00:38:56.982 sys 0m3.356s 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.982 ************************************ 00:38:56.982 END TEST nvmf_queue_depth 00:38:56.982 ************************************ 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.982 ************************************ 00:38:56.982 START TEST nvmf_target_multipath 00:38:56.982 ************************************ 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:56.982 * Looking for test storage... 00:38:56.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.982 --rc genhtml_branch_coverage=1 00:38:56.982 --rc genhtml_function_coverage=1 00:38:56.982 --rc genhtml_legend=1 00:38:56.982 --rc geninfo_all_blocks=1 00:38:56.982 --rc geninfo_unexecuted_blocks=1 00:38:56.982 00:38:56.982 ' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.982 --rc genhtml_branch_coverage=1 00:38:56.982 --rc genhtml_function_coverage=1 00:38:56.982 --rc genhtml_legend=1 00:38:56.982 --rc geninfo_all_blocks=1 00:38:56.982 --rc geninfo_unexecuted_blocks=1 00:38:56.982 00:38:56.982 ' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.982 --rc genhtml_branch_coverage=1 00:38:56.982 --rc genhtml_function_coverage=1 00:38:56.982 --rc genhtml_legend=1 00:38:56.982 --rc geninfo_all_blocks=1 00:38:56.982 --rc geninfo_unexecuted_blocks=1 00:38:56.982 00:38:56.982 ' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:56.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.982 --rc genhtml_branch_coverage=1 00:38:56.982 --rc genhtml_function_coverage=1 00:38:56.982 --rc 
genhtml_legend=1 00:38:56.982 --rc geninfo_all_blocks=1 00:38:56.982 --rc geninfo_unexecuted_blocks=1 00:38:56.982 00:38:56.982 ' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.982 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.983 01:56:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:56.983 01:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:58.885 01:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:58.885 01:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:58.885 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:58.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:58.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.886 01:56:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:58.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:58.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:58.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:58.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:38:58.886 00:38:58.886 --- 10.0.0.2 ping statistics --- 00:38:58.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.886 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:58.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:58.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:38:58.886 00:38:58.886 --- 10.0.0.1 ping statistics --- 00:38:58.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.886 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:58.886 only one NIC for nvmf test 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.886 rmmod nvme_tcp 00:38:58.886 rmmod nvme_fabrics 00:38:58.886 rmmod nvme_keyring 00:38:58.886 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p 
]] 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.887 01:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:01.418 01:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.418 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.418 00:39:01.418 real 0m4.416s 00:39:01.418 user 0m0.889s 00:39:01.418 sys 0m1.518s 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:01.419 ************************************ 00:39:01.419 END TEST nvmf_target_multipath 00:39:01.419 ************************************ 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:01.419 ************************************ 00:39:01.419 START TEST nvmf_zcopy 00:39:01.419 ************************************ 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:01.419 * Looking for test storage... 
00:39:01.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:01.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.419 --rc genhtml_branch_coverage=1 00:39:01.419 --rc genhtml_function_coverage=1 00:39:01.419 --rc genhtml_legend=1 00:39:01.419 --rc geninfo_all_blocks=1 00:39:01.419 --rc geninfo_unexecuted_blocks=1 00:39:01.419 00:39:01.419 ' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:01.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.419 --rc genhtml_branch_coverage=1 00:39:01.419 --rc genhtml_function_coverage=1 00:39:01.419 --rc genhtml_legend=1 00:39:01.419 --rc geninfo_all_blocks=1 00:39:01.419 --rc geninfo_unexecuted_blocks=1 00:39:01.419 00:39:01.419 ' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:01.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.419 --rc genhtml_branch_coverage=1 00:39:01.419 --rc genhtml_function_coverage=1 00:39:01.419 --rc genhtml_legend=1 00:39:01.419 --rc geninfo_all_blocks=1 00:39:01.419 --rc geninfo_unexecuted_blocks=1 00:39:01.419 00:39:01.419 ' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:01.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.419 --rc genhtml_branch_coverage=1 00:39:01.419 --rc genhtml_function_coverage=1 00:39:01.419 --rc genhtml_legend=1 00:39:01.419 --rc geninfo_all_blocks=1 00:39:01.419 --rc geninfo_unexecuted_blocks=1 00:39:01.419 00:39:01.419 ' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.419 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.420 01:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.420 01:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:03.364 01:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:03.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:03.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:03.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:03.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:03.364 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:03.365 01:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:03.365 01:56:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:03.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:03.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:39:03.365 00:39:03.365 --- 10.0.0.2 ping statistics --- 00:39:03.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.365 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:03.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:03.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:39:03.365 00:39:03.365 --- 10.0.0.1 ping statistics --- 00:39:03.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.365 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=1100468 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 1100468 
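For reference, the interface plumbing that nvmf_tcp_init traces out above condenses to the following shell, using the device names discovered on this host (cvl_0_0 / cvl_0_1 on 0000:0a:00.x); on another machine the names, and possibly the 10.0.0.x addressing, would differ, and the harness additionally tags its iptables rule with an SPDK_NVMF comment via its ipts wrapper:

    # Move the target-side port into its own network namespace so target and
    # initiator traffic crosses the physical link rather than loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the default namespace; the target side
    # gets 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listener port toward the initiator and check
    # reachability both ways before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings completing with 0% packet loss is what lets the setup path fall through to return 0 and continue into nvmfappstart.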
00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1100468 ']' 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:03.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:03.365 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.365 [2024-10-01 01:56:43.197734] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:03.365 [2024-10-01 01:56:43.198791] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:03.365 [2024-10-01 01:56:43.198857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:03.623 [2024-10-01 01:56:43.268157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.623 [2024-10-01 01:56:43.359857] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.623 [2024-10-01 01:56:43.359920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.623 [2024-10-01 01:56:43.359947] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:03.623 [2024-10-01 01:56:43.359961] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:03.623 [2024-10-01 01:56:43.359972] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:03.623 [2024-10-01 01:56:43.360013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.623 [2024-10-01 01:56:43.448673] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:03.623 [2024-10-01 01:56:43.449034] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:03.624 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.624 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:39:03.624 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:03.624 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:03.624 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 [2024-10-01 01:56:43.500649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 [2024-10-01 01:56:43.516803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:03.882 01:56:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 malloc0 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:03.882 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:03.882 { 00:39:03.882 "params": { 00:39:03.882 "name": "Nvme$subsystem", 00:39:03.882 "trtype": "$TEST_TRANSPORT", 00:39:03.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:03.882 "adrfam": "ipv4", 00:39:03.882 "trsvcid": "$NVMF_PORT", 00:39:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:03.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:03.883 "hdgst": ${hdgst:-false}, 00:39:03.883 "ddgst": ${ddgst:-false} 00:39:03.883 }, 00:39:03.883 "method": "bdev_nvme_attach_controller" 00:39:03.883 } 00:39:03.883 EOF 00:39:03.883 )") 00:39:03.883 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:03.883 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:39:03.883 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:03.883 01:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:03.883 "params": { 00:39:03.883 "name": "Nvme1", 00:39:03.883 "trtype": "tcp", 00:39:03.883 "traddr": "10.0.0.2", 00:39:03.883 "adrfam": "ipv4", 00:39:03.883 "trsvcid": "4420", 00:39:03.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:03.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:03.883 "hdgst": false, 00:39:03.883 "ddgst": false 00:39:03.883 }, 00:39:03.883 "method": "bdev_nvme_attach_controller" 00:39:03.883 }' 00:39:03.883 [2024-10-01 01:56:43.615161] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
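The rpc_cmd calls above are thin wrappers around scripts/rpc.py (the harness just adds the socket path and xtrace plumbing), so the provisioning sequence for this test reads, in plain shell, roughly as follows; every flag is taken from the trace, and /var/tmp/spdk.sock is the default socket assumed here:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    # TCP transport with zero-copy enabled and in-capsule data disabled
    # (-c 0); -o is the extra TCP-specific option the harness passes here
    # (see rpc.py nvmf_create_transport -h for its exact meaning).
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host (-a), up to 10 namespaces, listening on
    # the target-side address, plus a discovery listener on the same port.
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1.
    rpc bdev_malloc_create 32 4096 -b malloc0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1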
00:39:03.883 [2024-10-01 01:56:43.615251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100498 ] 00:39:03.883 [2024-10-01 01:56:43.678502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.141 [2024-10-01 01:56:43.769339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.399 Running I/O for 10 seconds... 00:39:14.619 5344.00 IOPS, 41.75 MiB/s 5377.00 IOPS, 42.01 MiB/s 5345.67 IOPS, 41.76 MiB/s 5363.00 IOPS, 41.90 MiB/s 5385.80 IOPS, 42.08 MiB/s 5388.83 IOPS, 42.10 MiB/s 5401.71 IOPS, 42.20 MiB/s 5411.12 IOPS, 42.27 MiB/s 5411.67 IOPS, 42.28 MiB/s 5404.90 IOPS, 42.23 MiB/s 00:39:14.619 Latency(us) 00:39:14.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.619 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:14.619 Verification LBA range: start 0x0 length 0x1000 00:39:14.619 Nvme1n1 : 10.01 5409.72 42.26 0.00 0.00 23595.09 2208.81 32428.18 00:39:14.619 =================================================================================================================== 00:39:14.619 Total : 5409.72 42.26 0.00 0.00 23595.09 2208.81 32428.18 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1101790 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:14.619 { 00:39:14.619 "params": { 00:39:14.619 "name": "Nvme$subsystem", 00:39:14.619 "trtype": "$TEST_TRANSPORT", 00:39:14.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:14.619 "adrfam": "ipv4", 00:39:14.619 "trsvcid": "$NVMF_PORT", 00:39:14.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:14.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:14.619 "hdgst": ${hdgst:-false}, 00:39:14.619 "ddgst": ${ddgst:-false} 00:39:14.619 }, 00:39:14.619 "method": "bdev_nvme_attach_controller" 00:39:14.619 } 00:39:14.619 EOF 00:39:14.619 )") 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:39:14.619 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:39:14.619 [2024-10-01 01:56:54.384557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.619 [2024-10-01 01:56:54.384606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:39:14.620 01:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:14.620 "params": { 00:39:14.620 "name": "Nvme1", 00:39:14.620 "trtype": "tcp", 00:39:14.620 "traddr": "10.0.0.2", 00:39:14.620 "adrfam": "ipv4", 00:39:14.620 "trsvcid": "4420", 00:39:14.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:14.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:14.620 "hdgst": false, 00:39:14.620 "ddgst": false 00:39:14.620 }, 00:39:14.620 "method": "bdev_nvme_attach_controller" 00:39:14.620 }' 00:39:14.620 [2024-10-01 01:56:54.392485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.392512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.400482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.400506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.408488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.408514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.416496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.416524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.424483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.424508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.427062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
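bdevperf takes its whole bdev stack from a JSON config, which the harness generates on the fly with gen_nvmf_target_json and hands over an anonymous file descriptor (--json /dev/fd/62 and /dev/fd/63 above). Written out to a file, a minimal standalone equivalent of the attach-controller entry printed in the trace looks like the sketch below; the real helper assembles the same entry through its own jq pipeline, and /tmp/bdevperf_nvmf.json is just an illustrative path:

    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # First pass (summarized above): 10 s of 8 KiB verify I/O at queue depth
    # 128, which settled around 5400 IOPS with ~23.6 ms average latency.
    ./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192

    # Second pass, the run starting here: 5 s of 8 KiB random I/O at a
    # 50/50 read/write mix (-w randrw -M 50) over the same connection.
    ./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192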
00:39:14.620 [2024-10-01 01:56:54.427126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101790 ] 00:39:14.620 [2024-10-01 01:56:54.432481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.432505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.440482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.440505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.448480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.448504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.456481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.456505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.464482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.464505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.620 [2024-10-01 01:56:54.472482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.620 [2024-10-01 01:56:54.472506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.480481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.480505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.488481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.488505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.496481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.496504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.496608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.878 [2024-10-01 01:56:54.504520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.504557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.512509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.512546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.520483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.520521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.528481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.528505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:14.878 [2024-10-01 01:56:54.536483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.878 [2024-10-01 01:56:54.536507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.878 [2024-10-01 01:56:54.544484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.544509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.552515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.552553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.560486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.560512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.568483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.568508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.576482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.576506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.584481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.584505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.589803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.879 [2024-10-01 01:56:54.592482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.592505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.600482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.600506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.608513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.608551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.616518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.616559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.624518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.624560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.632518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.632561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.640520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.640562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 
01:56:54.648523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.648564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.656484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.656508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.664518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.664572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.672520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.672560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.680500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.680533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.688482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.688505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.696491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.696520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.704490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.704518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.712488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.712515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.720589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.720617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:14.879 [2024-10-01 01:56:54.728485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:14.879 [2024-10-01 01:56:54.728506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.736484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.736509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.744482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.744506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.752481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.752506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.760480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.760504] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.768488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.768515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.776488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.776514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.784483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.784508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.792482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.792506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.800482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.800506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.808481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.808505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.816486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.816520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.824488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.824515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.833453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.833483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 [2024-10-01 01:56:54.840490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:15.137 [2024-10-01 01:56:54.840518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:15.137 Running I/O for 5 seconds... 
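The wall of paired messages above and below ("Requested NSID 1 already in use" from subsystem.c followed by "Unable to add namespace" from nvmf_rpc.c) is one rejected nvmf_subsystem_add_ns call per pair: the test keeps re-issuing the RPC while the 5-second randrw job is in flight, and each attempt is refused because malloc0 is still attached as NSID 1. This appears to be the subsystem pause/resume path the zcopy test wants to exercise under load, not a malfunction of the run. A single such call, reproduced by hand against the same target, would look like this (default socket path assumed):

    # Re-adding the already-attached namespace is rejected after the
    # subsystem is paused for the update -- the same two-line error as above.
    ./scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || echo "add_ns rejected as expected"

    # NSID 1 is still the only namespace and I/O continues undisturbed.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'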
00:39:15.137 [2024-10-01 01:56:54.848490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:15.137 [2024-10-01 01:56:54.848519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:15.137-00:39:16.173 The same two *ERROR* lines are logged as a pair for every further add-namespace attempt from 01:56:54.867 through 01:56:55.849, at roughly 12-18 ms intervals: each RPC requests NSID 1, which is already attached to the subsystem, so spdk_nvmf_subsystem_add_ns_ext rejects the request and nvmf_rpc_ns_paused reports that the namespace could not be added.
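The error pair above is what the SPDK NVMe-oF target prints when an nvmf_subsystem_add_ns RPC requests a namespace ID that is already attached to the subsystem. A minimal way to provoke it against a running nvmf_tgt is sketched below; the NQN, serial number, and Malloc bdev names are illustrative, and the rpc.py flags are assumed from SPDK's scripts/rpc.py rather than taken from this job's scripts.

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # The first add succeeds and attaches Malloc0 as NSID 1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # The second add requests the same NSID, so the target logs
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1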
00:39:16.173 9323.00 IOPS, 72.84 MiB/s 
00:39:16.173-00:39:17.208 The failed add-namespace attempts continue at the same pace from 01:56:55.862 through 01:56:56.835, each one logging the identical "Requested NSID 1 already in use" / "Unable to add namespace" pair, while the background I/O keeps running at the rate reported above.
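The failures arrive at a steady cadence while the I/O throughput above keeps being reported, which is consistent with something repeatedly retrying the add against a subsystem whose NSID 1 is already occupied. A hedged sketch of that kind of loop follows; it is illustrative only, not this job's actual test script, and it reuses the hypothetical NQN and bdev names from the earlier sketch.

  NQN=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 200); do
      # NSID 1 stays attached, so every attempt fails and the target logs one
      # "Requested NSID 1 already in use" / "Unable to add namespace" pair per
      # iteration, while fio or bdevperf keeps driving I/O in the background.
      ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 > /dev/null 2>&1 || true
  done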
00:39:17.208 9425.00 IOPS, 73.63 MiB/s 
00:39:17.208-00:39:17.984 The same *ERROR* pair is logged for every attempt from 01:56:56.846 through 01:56:57.806 with no change in the failure mode: NSID 1 remains attached, so each nvmf_subsystem_add_ns RPC is rejected.
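If the collision were not intentional, the occupied NSIDs can be inspected and the conflict avoided rather than retried. The commands below are a sketch using RPCs from scripts/rpc.py as I understand them, with the same illustrative NQN and bdev names as above.

  # List the subsystem's namespaces to see which NSIDs are already attached.
  ./scripts/rpc.py nvmf_get_subsystems
  # Either let the target pick the next free NSID by omitting -n ...
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # ... or detach NSID 1 first and then reattach the new bdev under it.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1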
00:39:18.243 9497.67 IOPS, 74.20 MiB/s 
00:39:18.243-00:39:19.018 Every add-namespace attempt from 01:56:57.819 through 01:56:58.850 fails with the same "Requested NSID 1 already in use" / "Unable to add namespace" pair, still at roughly 12-18 ms intervals. 
00:39:19.018 9512.00 IOPS, 74.31 MiB/s 
00:39:19.018-00:39:19.277 The attempts and the matching *ERROR* pairs continue unchanged from 01:56:58.863 through 01:56:58.970 while the background I/O throughput edges up as shown above. 
00:39:19.277 [2024-10-01 01:56:58.983819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01
01:56:58.983851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:58.997326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:58.997358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.010440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.010470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.028512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.028543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.039780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.039823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.054350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.054381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.067794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.067821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.080543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.080569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.094376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.094402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.107931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.107972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.277 [2024-10-01 01:56:59.121592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.277 [2024-10-01 01:56:59.121618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.135033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.135062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.148695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.148744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.162416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.162446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.175070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.175098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.188466] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.188492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.200845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.200889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.214077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.214101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.535 [2024-10-01 01:56:59.227175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.535 [2024-10-01 01:56:59.227201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.239518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.239544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.252730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.252761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.266021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.266048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.279679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.279709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.292892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.292916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.305013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.305044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.319138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.319165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.332347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.332377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.345374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.345406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.363528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.363555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.536 [2024-10-01 01:56:59.375317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.536 [2024-10-01 01:56:59.375361] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.389582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.389610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.405620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.405659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.416899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.416925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.431368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.431395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.445476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.445506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.461668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.461698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.472662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.472692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.486583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.486609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.500691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.500722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.513726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.513753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.527058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.527100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.539721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.539764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.553211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.553239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.571786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.571812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.584513] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.584553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.598158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.598186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.610834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.610860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.623700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.623741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.794 [2024-10-01 01:56:59.637235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.794 [2024-10-01 01:56:59.637260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.650222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.650249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.667079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.667128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.678291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.678317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.695118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.695145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.706898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.706928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.721462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.721488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.734973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.735013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.747645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.747669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.760552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.760578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.773812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.773837] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.786674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.786700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.799780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.799806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.812804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.812834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.825091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.825119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.842991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.843045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.854524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.854551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 9535.20 IOPS, 74.49 MiB/s [2024-10-01 01:56:59.868922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.868954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 00:39:20.054 Latency(us) 00:39:20.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:20.054 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:20.054 Nvme1n1 : 5.01 9536.24 74.50 0.00 0.00 13402.17 3470.98 21262.79 00:39:20.054 =================================================================================================================== 00:39:20.054 Total : 9536.24 74.50 0.00 0.00 13402.17 3470.98 21262.79 00:39:20.054 [2024-10-01 01:56:59.876609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.876640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.884489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.884517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.892497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.892534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.054 [2024-10-01 01:56:59.900537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.054 [2024-10-01 01:56:59.900587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.908533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.908580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 
01:56:59.916536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.916585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.924527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.924575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.932534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.932583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.940531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.940580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.948535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.948584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.956537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.956586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.964536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.964586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.972532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.972580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.980533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.980581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.988530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.988578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:56:59.996530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:56:59.996578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.004500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.004533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.012518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.012554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.020595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.020659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.028553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.028600] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.036517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.036553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.044486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.044516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.052539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.052586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.060551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.060601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.068518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.068560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.076511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.076541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 [2024-10-01 01:57:00.084499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.313 [2024-10-01 01:57:00.084529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1101790) - No such process 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1101790 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:20.313 delay0 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.313 01:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:20.571 [2024-10-01 01:57:00.247168] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:28.680 Initializing NVMe Controllers 00:39:28.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:28.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:28.680 Initialization complete. Launching workers. 00:39:28.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 254, failed: 18382 00:39:28.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18524, failed to submit 112 00:39:28.680 success 18417, unsuccessful 107, failed 0 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.680 rmmod nvme_tcp 00:39:28.680 rmmod nvme_fabrics 00:39:28.680 rmmod nvme_keyring 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 1100468 ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1100468 ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100468' 00:39:28.680 killing process with pid 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1100468 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.680 01:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.053 00:39:30.053 real 0m28.867s 00:39:30.053 user 0m40.062s 00:39:30.053 sys 0m10.676s 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:30.053 ************************************ 00:39:30.053 END TEST nvmf_zcopy 00:39:30.053 ************************************ 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:30.053 ************************************ 00:39:30.053 START TEST nvmf_nmic 00:39:30.053 ************************************ 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:30.053 * Looking for test storage... 00:39:30.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:30.053 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:30.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.054 --rc genhtml_branch_coverage=1 00:39:30.054 --rc genhtml_function_coverage=1 00:39:30.054 --rc genhtml_legend=1 00:39:30.054 --rc geninfo_all_blocks=1 00:39:30.054 --rc geninfo_unexecuted_blocks=1 00:39:30.054 00:39:30.054 ' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:30.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.054 --rc genhtml_branch_coverage=1 00:39:30.054 --rc genhtml_function_coverage=1 00:39:30.054 --rc genhtml_legend=1 00:39:30.054 --rc geninfo_all_blocks=1 00:39:30.054 --rc geninfo_unexecuted_blocks=1 00:39:30.054 00:39:30.054 ' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:30.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.054 --rc genhtml_branch_coverage=1 00:39:30.054 --rc genhtml_function_coverage=1 00:39:30.054 --rc genhtml_legend=1 00:39:30.054 --rc geninfo_all_blocks=1 00:39:30.054 --rc geninfo_unexecuted_blocks=1 00:39:30.054 00:39:30.054 ' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:30.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:30.054 --rc genhtml_branch_coverage=1 00:39:30.054 --rc genhtml_function_coverage=1 00:39:30.054 --rc genhtml_legend=1 00:39:30.054 --rc geninfo_all_blocks=1 00:39:30.054 --rc geninfo_unexecuted_blocks=1 00:39:30.054 00:39:30.054 ' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:30.054 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:30.313 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:30.314 01:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:30.314 01:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:32.216 01:57:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:32.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:32.216 01:57:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:32.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:32.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:32.216 01:57:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:32.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:32.216 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
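The trace above is the nvmftestinit/nvmf_tcp_init phase: it moves one of the two ice-driven ports (0000:0a:00.0) into a private network namespace so that target and initiator talk over a real TCP path on a single machine. Below is a condensed, hand-written sketch of the commands just traced, not the nvmf/common.sh helper itself; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from the log, while the helper's variable indirection and error handling are omitted.

# cvl_0_0 becomes the target-side port inside the namespace;
# cvl_0_1 stays in the default namespace as the initiator side.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up

The remaining steps of the same helper (bringing up the namespaced interface and its loopback, the iptables ACCEPT rule for TCP port 4420, and the two ping checks that confirm the path) follow immediately in the trace.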
00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:32.217 01:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:32.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:39:32.217 00:39:32.217 --- 10.0.0.2 ping statistics --- 00:39:32.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.217 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:32.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:39:32.217 00:39:32.217 --- 10.0.0.1 ping statistics --- 00:39:32.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.217 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=1105794 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 1105794 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@831 -- # '[' -z 1105794 ']' 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:32.217 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.477 [2024-10-01 01:57:12.081382] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:32.477 [2024-10-01 01:57:12.082527] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:32.477 [2024-10-01 01:57:12.082586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.477 [2024-10-01 01:57:12.154598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:32.477 [2024-10-01 01:57:12.246875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.477 [2024-10-01 01:57:12.246938] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.477 [2024-10-01 01:57:12.246955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.477 [2024-10-01 01:57:12.246969] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.477 [2024-10-01 01:57:12.246981] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:32.477 [2024-10-01 01:57:12.247053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.477 [2024-10-01 01:57:12.247110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:32.477 [2024-10-01 01:57:12.247369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:32.477 [2024-10-01 01:57:12.247372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.738 [2024-10-01 01:57:12.356414] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:32.738 [2024-10-01 01:57:12.356621] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:32.738 [2024-10-01 01:57:12.357060] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:32.738 [2024-10-01 01:57:12.357582] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:32.738 [2024-10-01 01:57:12.357858] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 [2024-10-01 01:57:12.407969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 Malloc0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 [2024-10-01 01:57:12.464191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:32.738 test case1: single bdev can't be used in multiple subsystems 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.738 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.738 [2024-10-01 01:57:12.487889] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:32.739 [2024-10-01 01:57:12.487918] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:32.739 [2024-10-01 01:57:12.487949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:32.739 request: 00:39:32.739 { 00:39:32.739 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:32.739 "namespace": { 00:39:32.739 "bdev_name": "Malloc0", 00:39:32.739 "no_auto_visible": false 00:39:32.739 }, 00:39:32.739 "method": "nvmf_subsystem_add_ns", 00:39:32.739 "req_id": 1 00:39:32.739 } 00:39:32.739 Got JSON-RPC error response 00:39:32.739 response: 00:39:32.739 { 00:39:32.739 "code": -32602, 00:39:32.739 "message": "Invalid parameters" 00:39:32.739 } 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:32.739 Adding namespace failed - expected result. 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:32.739 test case2: host connect to nvmf target in multiple paths 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:32.739 [2024-10-01 01:57:12.495975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.739 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:33.029 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:33.288 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:33.288 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:33.288 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:33.288 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:33.288 01:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:35.191 01:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:35.191 [global] 00:39:35.191 thread=1 00:39:35.191 invalidate=1 00:39:35.191 rw=write 00:39:35.191 time_based=1 00:39:35.191 runtime=1 00:39:35.191 ioengine=libaio 00:39:35.191 direct=1 00:39:35.191 bs=4096 00:39:35.191 iodepth=1 
00:39:35.191 norandommap=0 00:39:35.191 numjobs=1 00:39:35.191 00:39:35.191 verify_dump=1 00:39:35.191 verify_backlog=512 00:39:35.191 verify_state_save=0 00:39:35.191 do_verify=1 00:39:35.191 verify=crc32c-intel 00:39:35.191 [job0] 00:39:35.191 filename=/dev/nvme0n1 00:39:35.191 Could not set queue depth (nvme0n1) 00:39:35.449 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.449 fio-3.35 00:39:35.449 Starting 1 thread 00:39:36.823 00:39:36.823 job0: (groupid=0, jobs=1): err= 0: pid=1106295: Tue Oct 1 01:57:16 2024 00:39:36.823 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:36.823 slat (nsec): min=4338, max=35232, avg=10223.62, stdev=3984.84 00:39:36.823 clat (usec): min=279, max=580, avg=364.63, stdev=62.49 00:39:36.823 lat (usec): min=284, max=592, avg=374.85, stdev=64.47 00:39:36.823 clat percentiles (usec): 00:39:36.823 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:39:36.823 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 383], 60.00th=[ 383], 00:39:36.823 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 453], 95.00th=[ 515], 00:39:36.823 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[ 578], 00:39:36.823 | 99.99th=[ 578] 00:39:36.823 write: IOPS=1991, BW=7964KiB/s (8155kB/s)(7972KiB/1001msec); 0 zone resets 00:39:36.823 slat (nsec): min=5568, max=34758, avg=7225.99, stdev=2453.65 00:39:36.823 clat (usec): min=162, max=382, avg=201.24, stdev=29.53 00:39:36.823 lat (usec): min=168, max=396, avg=208.47, stdev=29.85 00:39:36.823 clat percentiles (usec): 00:39:36.823 | 1.00th=[ 167], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 174], 00:39:36.823 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 198], 60.00th=[ 210], 00:39:36.823 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 249], 00:39:36.823 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 379], 99.95th=[ 383], 00:39:36.823 | 99.99th=[ 383] 00:39:36.823 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:39:36.823 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:36.823 lat (usec) : 250=54.63%, 500=42.87%, 750=2.49% 00:39:36.823 cpu : usr=2.30%, sys=2.40%, ctx=3529, majf=0, minf=1 00:39:36.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:36.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.823 issued rwts: total=1536,1993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:36.823 00:39:36.823 Run status group 0 (all jobs): 00:39:36.823 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:39:36.823 WRITE: bw=7964KiB/s (8155kB/s), 7964KiB/s-7964KiB/s (8155kB/s-8155kB/s), io=7972KiB (8163kB), run=1001-1001msec 00:39:36.823 00:39:36.823 Disk stats (read/write): 00:39:36.823 nvme0n1: ios=1586/1573, merge=0/0, ticks=588/272, in_queue=860, util=91.78% 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:36.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- 
# local i=0 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:36.823 rmmod nvme_tcp 00:39:36.823 rmmod nvme_fabrics 00:39:36.823 rmmod nvme_keyring 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:36.823 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 1105794 ']' 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 1105794 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1105794 ']' 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1105794 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1105794 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1105794' 00:39:36.824 killing process with pid 1105794 00:39:36.824 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1105794 00:39:36.824 01:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1105794 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.085 01:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.985 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.985 00:39:38.985 real 0m9.096s 00:39:38.985 user 0m16.952s 00:39:38.985 sys 0m3.268s 00:39:38.985 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:38.985 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:38.985 ************************************ 00:39:38.985 END TEST nvmf_nmic 00:39:38.985 ************************************ 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:39.244 ************************************ 00:39:39.244 START TEST nvmf_fio_target 00:39:39.244 ************************************ 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:39.244 * Looking for test storage... 
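The nvmf_nmic run that just ended can be summarized as the sequence sketched below. This is a condensed, hand-written recap of the RPC and host-side commands visible in the trace, not the target/nmic.sh script itself: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the waitforserial/waitforserial_disconnect polling and error checking are omitted, and the paths are the workspace paths printed by the job.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# Target launched by nvmfappstart inside the test namespace, in interrupt mode.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

# One TCP transport, one 64 MiB malloc bdev, one subsystem with a namespace and a listener.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Test case 1: the same bdev cannot be added to a second subsystem
# (expected JSON-RPC error -32602 "Invalid parameters", as seen above).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "add_ns failed as expected"

# Test case 2: the host connects to cnode1 over two listeners (4420 and 4421).
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Single-job 4 KiB write pass with crc32c verify over the new namespace, then tear down both paths.
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The fio numbers reported above come from that single one-second write job; the disconnect message ("disconnected 2 controller(s)") reflects the two paths created in test case 2.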
00:39:39.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:39:39.244 01:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.244 --rc genhtml_branch_coverage=1 00:39:39.244 --rc genhtml_function_coverage=1 00:39:39.244 --rc genhtml_legend=1 00:39:39.244 --rc geninfo_all_blocks=1 00:39:39.244 --rc geninfo_unexecuted_blocks=1 00:39:39.244 00:39:39.244 ' 00:39:39.244 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.245 --rc genhtml_branch_coverage=1 00:39:39.245 --rc genhtml_function_coverage=1 00:39:39.245 --rc genhtml_legend=1 00:39:39.245 --rc geninfo_all_blocks=1 00:39:39.245 --rc geninfo_unexecuted_blocks=1 00:39:39.245 00:39:39.245 ' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.245 --rc genhtml_branch_coverage=1 00:39:39.245 --rc genhtml_function_coverage=1 00:39:39.245 --rc genhtml_legend=1 00:39:39.245 --rc geninfo_all_blocks=1 00:39:39.245 --rc geninfo_unexecuted_blocks=1 00:39:39.245 00:39:39.245 ' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.245 --rc genhtml_branch_coverage=1 00:39:39.245 --rc genhtml_function_coverage=1 00:39:39.245 --rc genhtml_legend=1 00:39:39.245 --rc geninfo_all_blocks=1 00:39:39.245 --rc geninfo_unexecuted_blocks=1 00:39:39.245 
00:39:39.245 ' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:39.245 01:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:41.152 01:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.152 01:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:41.152 01:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:41.152 01:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:41.152 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.153 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:41.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:41.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:41.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.414 01:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:41.414 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:41.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:41.415 01:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:41.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:39:41.415 00:39:41.415 --- 10.0.0.2 ping statistics --- 00:39:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.415 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:41.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:39:41.415 00:39:41.415 --- 10.0.0.1 ping statistics --- 00:39:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.415 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=1108372 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 1108372 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1108372 ']' 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
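Condensed from the trace above, the nvmf_tcp_init step amounts to the following sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones detected in this run; a minimal sketch of what common.sh does here, not the script itself):

  ip netns add cvl_0_0_ns_spdk                                   # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check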
00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:41.415 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:41.415 [2024-10-01 01:57:21.212637] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:41.415 [2024-10-01 01:57:21.213734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:39:41.415 [2024-10-01 01:57:21.213812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.674 [2024-10-01 01:57:21.280874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:41.674 [2024-10-01 01:57:21.372063] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.674 [2024-10-01 01:57:21.372124] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:41.674 [2024-10-01 01:57:21.372153] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.674 [2024-10-01 01:57:21.372165] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.674 [2024-10-01 01:57:21.372175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.674 [2024-10-01 01:57:21.372246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.675 [2024-10-01 01:57:21.372589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:41.675 [2024-10-01 01:57:21.372616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:41.675 [2024-10-01 01:57:21.372618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.675 [2024-10-01 01:57:21.474436] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:41.675 [2024-10-01 01:57:21.474691] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:41.675 [2024-10-01 01:57:21.475125] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:41.675 [2024-10-01 01:57:21.475711] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:41.675 [2024-10-01 01:57:21.475985] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
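The target configuration traced below boils down to this RPC sequence (rpc.py stands in for the full scripts/rpc.py path printed in the log; a condensed sketch of the fio.sh setup steps, mirroring the calls in the trace rather than replaying it verbatim):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                       # repeated seven times -> Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

Each fio-wrapper run that follows then exercises the four resulting namespaces (/dev/nvme0n1..nvme0n4) with the job parameters it prints before starting.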
00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:41.675 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:42.242 [2024-10-01 01:57:21.833413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.242 01:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:42.500 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:42.500 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:42.758 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:42.759 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.018 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:43.018 01:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.275 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:43.275 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:43.534 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:43.793 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:43.793 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:44.363 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:44.363 01:57:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:44.363 01:57:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:44.363 01:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:44.929 01:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:45.187 01:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:45.187 01:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:45.446 01:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:45.446 01:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:45.705 01:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.964 [2024-10-01 01:57:25.649589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.964 01:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:46.222 01:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:46.483 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:46.741 01:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:48.643 01:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:48.643 [global] 00:39:48.643 thread=1 00:39:48.643 invalidate=1 00:39:48.643 rw=write 00:39:48.643 time_based=1 00:39:48.643 runtime=1 00:39:48.643 ioengine=libaio 00:39:48.643 direct=1 00:39:48.643 bs=4096 00:39:48.643 iodepth=1 00:39:48.643 norandommap=0 00:39:48.643 numjobs=1 00:39:48.643 00:39:48.643 verify_dump=1 00:39:48.643 verify_backlog=512 00:39:48.643 verify_state_save=0 00:39:48.643 do_verify=1 00:39:48.643 verify=crc32c-intel 00:39:48.643 [job0] 00:39:48.643 filename=/dev/nvme0n1 00:39:48.643 [job1] 00:39:48.643 filename=/dev/nvme0n2 00:39:48.643 [job2] 00:39:48.643 filename=/dev/nvme0n3 00:39:48.643 [job3] 00:39:48.643 filename=/dev/nvme0n4 00:39:48.643 Could not set queue depth (nvme0n1) 00:39:48.643 Could not set queue depth (nvme0n2) 00:39:48.643 Could not set queue depth (nvme0n3) 00:39:48.643 Could not set queue depth (nvme0n4) 00:39:48.901 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.901 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.901 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.901 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:48.901 fio-3.35 00:39:48.901 Starting 4 threads 00:39:50.274 00:39:50.274 job0: (groupid=0, jobs=1): err= 0: pid=1109317: Tue Oct 1 01:57:29 2024 00:39:50.274 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:50.274 slat (nsec): min=4766, max=38679, avg=10551.21, stdev=3797.97 00:39:50.274 clat (usec): min=274, max=534, avg=341.96, stdev=46.98 00:39:50.274 lat (usec): min=279, max=549, avg=352.51, stdev=48.83 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:39:50.274 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 338], 60.00th=[ 383], 00:39:50.274 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 392], 95.00th=[ 400], 00:39:50.274 | 99.00th=[ 449], 99.50th=[ 453], 99.90th=[ 474], 99.95th=[ 537], 00:39:50.274 | 99.99th=[ 537] 00:39:50.274 write: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec); 0 zone resets 00:39:50.274 slat (nsec): min=6316, max=47925, avg=10225.98, stdev=5256.65 00:39:50.274 clat (usec): min=185, max=594, avg=232.27, stdev=75.09 00:39:50.274 lat (usec): min=192, max=611, avg=242.50, stdev=77.49 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:39:50.274 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:39:50.274 | 70.00th=[ 212], 80.00th=[ 245], 90.00th=[ 379], 95.00th=[ 445], 00:39:50.274 | 99.00th=[ 482], 99.50th=[ 
506], 99.90th=[ 553], 99.95th=[ 594], 00:39:50.274 | 99.99th=[ 594] 00:39:50.274 bw ( KiB/s): min= 8175, max= 8175, per=61.53%, avg=8175.00, stdev= 0.00, samples=1 00:39:50.274 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:39:50.274 lat (usec) : 250=44.86%, 500=54.78%, 750=0.35% 00:39:50.274 cpu : usr=2.20%, sys=3.20%, ctx=3390, majf=0, minf=1 00:39:50.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.274 issued rwts: total=1536,1852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.274 job1: (groupid=0, jobs=1): err= 0: pid=1109318: Tue Oct 1 01:57:29 2024 00:39:50.274 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:39:50.274 slat (nsec): min=7163, max=33205, avg=13950.55, stdev=4574.09 00:39:50.274 clat (usec): min=40915, max=41308, avg=40995.65, stdev=74.68 00:39:50.274 lat (usec): min=40945, max=41315, avg=41009.60, stdev=72.49 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:50.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:50.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:50.274 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:50.274 | 99.99th=[41157] 00:39:50.274 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:39:50.274 slat (nsec): min=7055, max=32459, avg=9049.05, stdev=3110.65 00:39:50.274 clat (usec): min=180, max=380, avg=217.56, stdev=24.64 00:39:50.274 lat (usec): min=189, max=403, avg=226.61, stdev=25.35 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:39:50.274 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 219], 00:39:50.274 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 262], 00:39:50.274 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 379], 99.95th=[ 379], 00:39:50.274 | 99.99th=[ 379] 00:39:50.274 bw ( KiB/s): min= 4087, max= 4087, per=30.76%, avg=4087.00, stdev= 0.00, samples=1 00:39:50.274 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:50.274 lat (usec) : 250=85.21%, 500=10.67% 00:39:50.274 lat (msec) : 50=4.12% 00:39:50.274 cpu : usr=0.59%, sys=0.29%, ctx=534, majf=0, minf=2 00:39:50.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.274 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.274 job2: (groupid=0, jobs=1): err= 0: pid=1109322: Tue Oct 1 01:57:29 2024 00:39:50.274 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:39:50.274 slat (nsec): min=6967, max=27589, avg=14414.10, stdev=3499.18 00:39:50.274 clat (usec): min=40507, max=41130, avg=40964.21, stdev=122.17 00:39:50.274 lat (usec): min=40514, max=41147, avg=40978.62, stdev=122.96 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:50.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:50.274 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:50.274 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:50.274 | 99.99th=[41157] 00:39:50.274 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:39:50.274 slat (nsec): min=7103, max=27755, avg=9325.79, stdev=2301.55 00:39:50.274 clat (usec): min=184, max=406, avg=271.30, stdev=46.21 00:39:50.274 lat (usec): min=192, max=416, avg=280.62, stdev=46.53 00:39:50.274 clat percentiles (usec): 00:39:50.274 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 233], 00:39:50.275 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 281], 00:39:50.275 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 347], 00:39:50.275 | 99.00th=[ 367], 99.50th=[ 400], 99.90th=[ 408], 99.95th=[ 408], 00:39:50.275 | 99.99th=[ 408] 00:39:50.275 bw ( KiB/s): min= 4087, max= 4087, per=30.76%, avg=4087.00, stdev= 0.00, samples=1 00:39:50.275 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:50.275 lat (usec) : 250=32.46%, 500=63.60% 00:39:50.275 lat (msec) : 50=3.94% 00:39:50.275 cpu : usr=0.10%, sys=0.80%, ctx=534, majf=0, minf=1 00:39:50.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.275 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.275 job3: (groupid=0, jobs=1): err= 0: pid=1109326: Tue Oct 1 01:57:29 2024 00:39:50.275 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:39:50.275 slat (nsec): min=7860, max=35627, avg=15623.43, stdev=6379.54 00:39:50.275 clat (usec): min=40874, max=41077, avg=40975.93, stdev=44.34 00:39:50.275 lat (usec): min=40910, max=41091, avg=40991.56, stdev=42.41 00:39:50.275 clat percentiles (usec): 00:39:50.275 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:50.275 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:50.275 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:50.275 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:50.275 | 99.99th=[41157] 00:39:50.275 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:39:50.275 slat (nsec): min=7508, max=34156, avg=9378.28, stdev=2930.66 00:39:50.275 clat (usec): min=197, max=416, avg=276.51, stdev=37.22 00:39:50.275 lat (usec): min=206, max=426, avg=285.89, stdev=37.15 00:39:50.275 clat percentiles (usec): 00:39:50.275 | 1.00th=[ 212], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:39:50.275 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 277], 00:39:50.275 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 338], 00:39:50.275 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 416], 99.95th=[ 416], 00:39:50.275 | 99.99th=[ 416] 00:39:50.275 bw ( KiB/s): min= 4087, max= 4087, per=30.76%, avg=4087.00, stdev= 0.00, samples=1 00:39:50.275 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:50.275 lat (usec) : 250=26.45%, 500=69.61% 00:39:50.275 lat (msec) : 50=3.94% 00:39:50.275 cpu : usr=0.40%, sys=0.50%, ctx=533, majf=0, minf=1 00:39:50.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.275 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.275 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.275 00:39:50.275 Run status group 0 (all jobs): 00:39:50.275 READ: bw=6275KiB/s (6425kB/s), 83.2KiB/s-6138KiB/s (85.2kB/s-6285kB/s), io=6400KiB (6554kB), run=1001-1020msec 00:39:50.275 WRITE: bw=13.0MiB/s (13.6MB/s), 2008KiB/s-7401KiB/s (2056kB/s-7578kB/s), io=13.2MiB (13.9MB), run=1001-1020msec 00:39:50.275 00:39:50.275 Disk stats (read/write): 00:39:50.275 nvme0n1: ios=1344/1536, merge=0/0, ticks=1425/350, in_queue=1775, util=97.70% 00:39:50.275 nvme0n2: ios=37/512, merge=0/0, ticks=726/109, in_queue=835, util=86.88% 00:39:50.275 nvme0n3: ios=46/512, merge=0/0, ticks=1683/134, in_queue=1817, util=97.80% 00:39:50.275 nvme0n4: ios=17/512, merge=0/0, ticks=697/139, in_queue=836, util=89.66% 00:39:50.275 01:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:50.275 [global] 00:39:50.275 thread=1 00:39:50.275 invalidate=1 00:39:50.275 rw=randwrite 00:39:50.275 time_based=1 00:39:50.275 runtime=1 00:39:50.275 ioengine=libaio 00:39:50.275 direct=1 00:39:50.275 bs=4096 00:39:50.275 iodepth=1 00:39:50.275 norandommap=0 00:39:50.275 numjobs=1 00:39:50.275 00:39:50.275 verify_dump=1 00:39:50.275 verify_backlog=512 00:39:50.275 verify_state_save=0 00:39:50.275 do_verify=1 00:39:50.275 verify=crc32c-intel 00:39:50.275 [job0] 00:39:50.275 filename=/dev/nvme0n1 00:39:50.275 [job1] 00:39:50.275 filename=/dev/nvme0n2 00:39:50.275 [job2] 00:39:50.275 filename=/dev/nvme0n3 00:39:50.275 [job3] 00:39:50.275 filename=/dev/nvme0n4 00:39:50.275 Could not set queue depth (nvme0n1) 00:39:50.275 Could not set queue depth (nvme0n2) 00:39:50.275 Could not set queue depth (nvme0n3) 00:39:50.275 Could not set queue depth (nvme0n4) 00:39:50.275 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.275 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.275 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.275 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.275 fio-3.35 00:39:50.275 Starting 4 threads 00:39:51.655 00:39:51.655 job0: (groupid=0, jobs=1): err= 0: pid=1109664: Tue Oct 1 01:57:31 2024 00:39:51.655 read: IOPS=1028, BW=4116KiB/s (4214kB/s)(4128KiB/1003msec) 00:39:51.655 slat (nsec): min=4614, max=35300, avg=13003.89, stdev=4710.77 00:39:51.655 clat (usec): min=252, max=41053, avg=608.93, stdev=3568.41 00:39:51.655 lat (usec): min=257, max=41067, avg=621.93, stdev=3568.43 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:39:51.655 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:39:51.655 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 379], 00:39:51.655 | 99.00th=[ 416], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:51.655 | 99.99th=[41157] 00:39:51.655 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:39:51.655 slat (nsec): min=6189, max=55739, avg=14396.53, stdev=6799.95 00:39:51.655 clat (usec): min=167, max=1136, avg=214.02, stdev=40.18 
00:39:51.655 lat (usec): min=174, max=1151, avg=228.42, stdev=43.51 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:39:51.655 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:39:51.655 | 70.00th=[ 225], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:39:51.655 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 343], 99.95th=[ 1139], 00:39:51.655 | 99.99th=[ 1139] 00:39:51.655 bw ( KiB/s): min= 4096, max= 8192, per=26.78%, avg=6144.00, stdev=2896.31, samples=2 00:39:51.655 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:39:51.655 lat (usec) : 250=48.40%, 500=51.21%, 750=0.04% 00:39:51.655 lat (msec) : 2=0.04%, 50=0.31% 00:39:51.655 cpu : usr=1.50%, sys=4.49%, ctx=2568, majf=0, minf=2 00:39:51.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:51.655 job1: (groupid=0, jobs=1): err= 0: pid=1109665: Tue Oct 1 01:57:31 2024 00:39:51.655 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:51.655 slat (nsec): min=5422, max=36623, avg=11329.27, stdev=5729.44 00:39:51.655 clat (usec): min=269, max=622, avg=339.95, stdev=63.28 00:39:51.655 lat (usec): min=276, max=629, avg=351.28, stdev=63.16 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:39:51.655 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:39:51.655 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 453], 95.00th=[ 469], 00:39:51.655 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 619], 99.95th=[ 627], 00:39:51.655 | 99.99th=[ 627] 00:39:51.655 write: IOPS=1843, BW=7373KiB/s (7550kB/s)(7380KiB/1001msec); 0 zone resets 00:39:51.655 slat (nsec): min=6754, max=70615, avg=14506.38, stdev=7425.67 00:39:51.655 clat (usec): min=164, max=465, avg=227.90, stdev=47.89 00:39:51.655 lat (usec): min=171, max=509, avg=242.40, stdev=51.03 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:39:51.655 | 30.00th=[ 198], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:39:51.655 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 314], 00:39:51.655 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 465], 99.95th=[ 465], 00:39:51.655 | 99.99th=[ 465] 00:39:51.655 bw ( KiB/s): min= 8192, max= 8192, per=35.71%, avg=8192.00, stdev= 0.00, samples=1 00:39:51.655 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:51.655 lat (usec) : 250=42.56%, 500=56.14%, 750=1.30% 00:39:51.655 cpu : usr=3.80%, sys=5.30%, ctx=3381, majf=0, minf=1 00:39:51.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 issued rwts: total=1536,1845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:51.655 job2: (groupid=0, jobs=1): err= 0: pid=1109666: Tue Oct 1 01:57:31 2024 00:39:51.655 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:39:51.655 slat (nsec): min=13255, max=33395, 
avg=21445.17, stdev=9605.81 00:39:51.655 clat (usec): min=424, max=41104, avg=39169.39, stdev=8447.25 00:39:51.655 lat (usec): min=457, max=41118, avg=39190.83, stdev=8444.68 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 424], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:51.655 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:51.655 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:51.655 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:51.655 | 99.99th=[41157] 00:39:51.655 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:39:51.655 slat (nsec): min=6527, max=41164, avg=15083.84, stdev=6298.90 00:39:51.655 clat (usec): min=182, max=411, avg=243.78, stdev=37.15 00:39:51.655 lat (usec): min=191, max=428, avg=258.86, stdev=37.38 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 219], 00:39:51.655 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:39:51.655 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 318], 00:39:51.655 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 412], 99.95th=[ 412], 00:39:51.655 | 99.99th=[ 412] 00:39:51.655 bw ( KiB/s): min= 4096, max= 4096, per=17.86%, avg=4096.00, stdev= 0.00, samples=1 00:39:51.655 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:51.655 lat (usec) : 250=66.73%, 500=29.16% 00:39:51.655 lat (msec) : 50=4.11% 00:39:51.655 cpu : usr=0.39%, sys=0.68%, ctx=535, majf=0, minf=1 00:39:51.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.655 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:51.655 job3: (groupid=0, jobs=1): err= 0: pid=1109669: Tue Oct 1 01:57:31 2024 00:39:51.655 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:39:51.655 slat (nsec): min=5887, max=48301, avg=11932.24, stdev=6438.14 00:39:51.655 clat (usec): min=252, max=1344, avg=297.72, stdev=48.92 00:39:51.655 lat (usec): min=259, max=1358, avg=309.66, stdev=51.90 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 273], 00:39:51.655 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:39:51.655 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 343], 00:39:51.655 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 1037], 99.95th=[ 1352], 00:39:51.655 | 99.99th=[ 1352] 00:39:51.655 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:51.655 slat (nsec): min=7027, max=46420, avg=14678.48, stdev=7500.99 00:39:51.655 clat (usec): min=178, max=425, avg=219.47, stdev=25.09 00:39:51.655 lat (usec): min=186, max=447, avg=234.15, stdev=30.07 00:39:51.655 clat percentiles (usec): 00:39:51.655 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:39:51.655 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 227], 00:39:51.655 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 00:39:51.655 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 396], 99.95th=[ 400], 00:39:51.655 | 99.99th=[ 424] 00:39:51.655 bw ( KiB/s): min= 8192, max= 8192, per=35.71%, avg=8192.00, stdev= 0.00, samples=1 00:39:51.655 iops : min= 2048, max= 2048, 
avg=2048.00, stdev= 0.00, samples=1 00:39:51.655 lat (usec) : 250=50.08%, 500=49.70%, 750=0.14% 00:39:51.655 lat (msec) : 2=0.08% 00:39:51.655 cpu : usr=3.90%, sys=5.90%, ctx=3679, majf=0, minf=1 00:39:51.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.656 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:51.656 00:39:51.656 Run status group 0 (all jobs): 00:39:51.656 READ: bw=15.9MiB/s (16.7MB/s), 88.8KiB/s-6505KiB/s (90.9kB/s-6662kB/s), io=16.5MiB (17.3MB), run=1001-1036msec 00:39:51.656 WRITE: bw=22.4MiB/s (23.5MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=23.2MiB (24.3MB), run=1001-1036msec 00:39:51.656 00:39:51.656 Disk stats (read/write): 00:39:51.656 nvme0n1: ios=1078/1536, merge=0/0, ticks=474/313, in_queue=787, util=87.07% 00:39:51.656 nvme0n2: ios=1396/1536, merge=0/0, ticks=779/330, in_queue=1109, util=91.37% 00:39:51.656 nvme0n3: ios=48/512, merge=0/0, ticks=848/119, in_queue=967, util=100.00% 00:39:51.656 nvme0n4: ios=1522/1536, merge=0/0, ticks=1364/326, in_queue=1690, util=97.16% 00:39:51.656 01:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:51.656 [global] 00:39:51.656 thread=1 00:39:51.656 invalidate=1 00:39:51.656 rw=write 00:39:51.656 time_based=1 00:39:51.656 runtime=1 00:39:51.656 ioengine=libaio 00:39:51.656 direct=1 00:39:51.656 bs=4096 00:39:51.656 iodepth=128 00:39:51.656 norandommap=0 00:39:51.656 numjobs=1 00:39:51.656 00:39:51.656 verify_dump=1 00:39:51.656 verify_backlog=512 00:39:51.656 verify_state_save=0 00:39:51.656 do_verify=1 00:39:51.656 verify=crc32c-intel 00:39:51.656 [job0] 00:39:51.656 filename=/dev/nvme0n1 00:39:51.656 [job1] 00:39:51.656 filename=/dev/nvme0n2 00:39:51.656 [job2] 00:39:51.656 filename=/dev/nvme0n3 00:39:51.656 [job3] 00:39:51.656 filename=/dev/nvme0n4 00:39:51.656 Could not set queue depth (nvme0n1) 00:39:51.656 Could not set queue depth (nvme0n2) 00:39:51.656 Could not set queue depth (nvme0n3) 00:39:51.656 Could not set queue depth (nvme0n4) 00:39:51.914 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.914 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.914 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.914 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:51.914 fio-3.35 00:39:51.914 Starting 4 threads 00:39:53.294 00:39:53.294 job0: (groupid=0, jobs=1): err= 0: pid=1109897: Tue Oct 1 01:57:32 2024 00:39:53.294 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:39:53.294 slat (usec): min=2, max=6466, avg=95.89, stdev=510.01 00:39:53.294 clat (usec): min=4108, max=26182, avg=13137.70, stdev=3290.87 00:39:53.294 lat (usec): min=4113, max=26189, avg=13233.59, stdev=3318.18 00:39:53.294 clat percentiles (usec): 00:39:53.294 | 1.00th=[ 4752], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10683], 00:39:53.294 | 30.00th=[11076], 40.00th=[12125], 50.00th=[13173], 60.00th=[13829], 00:39:53.294 | 70.00th=[14484], 
80.00th=[15664], 90.00th=[18744], 95.00th=[19006], 00:39:53.294 | 99.00th=[20317], 99.50th=[20317], 99.90th=[22938], 99.95th=[23200], 00:39:53.294 | 99.99th=[26084] 00:39:53.294 write: IOPS=4922, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1003msec); 0 zone resets 00:39:53.294 slat (usec): min=3, max=32012, avg=96.70, stdev=766.09 00:39:53.294 clat (usec): min=1420, max=57920, avg=12746.29, stdev=4332.73 00:39:53.294 lat (usec): min=2488, max=57930, avg=12842.99, stdev=4394.22 00:39:53.294 clat percentiles (usec): 00:39:53.294 | 1.00th=[ 4621], 5.00th=[ 7701], 10.00th=[ 9110], 20.00th=[10290], 00:39:53.294 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13304], 00:39:53.294 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16319], 95.00th=[17171], 00:39:53.294 | 99.00th=[27657], 99.50th=[37487], 99.90th=[55837], 99.95th=[57934], 00:39:53.294 | 99.99th=[57934] 00:39:53.294 bw ( KiB/s): min=19200, max=19310, per=31.20%, avg=19255.00, stdev=77.78, samples=2 00:39:53.294 iops : min= 4800, max= 4827, avg=4813.50, stdev=19.09, samples=2 00:39:53.294 lat (msec) : 2=0.01%, 4=0.01%, 10=15.89%, 20=82.45%, 50=1.46% 00:39:53.294 lat (msec) : 100=0.18% 00:39:53.294 cpu : usr=6.69%, sys=8.18%, ctx=311, majf=0, minf=1 00:39:53.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:53.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.295 issued rwts: total=4608,4937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.295 job1: (groupid=0, jobs=1): err= 0: pid=1109898: Tue Oct 1 01:57:32 2024 00:39:53.295 read: IOPS=2681, BW=10.5MiB/s (11.0MB/s)(10.9MiB/1045msec) 00:39:53.295 slat (usec): min=2, max=50346, avg=164.37, stdev=1264.45 00:39:53.295 clat (usec): min=5970, max=93418, avg=20688.64, stdev=14571.91 00:39:53.295 lat (usec): min=5980, max=93423, avg=20853.02, stdev=14628.64 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10814], 00:39:53.295 | 30.00th=[12256], 40.00th=[13829], 50.00th=[14615], 60.00th=[16712], 00:39:53.295 | 70.00th=[20841], 80.00th=[28967], 90.00th=[42206], 95.00th=[51643], 00:39:53.295 | 99.00th=[70779], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:39:53.295 | 99.99th=[93848] 00:39:53.295 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:39:53.295 slat (usec): min=3, max=12990, avg=167.56, stdev=841.94 00:39:53.295 clat (usec): min=5800, max=71292, avg=24149.11, stdev=16739.19 00:39:53.295 lat (usec): min=5810, max=77914, avg=24316.67, stdev=16834.67 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 7898], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[12125], 00:39:53.295 | 30.00th=[13960], 40.00th=[16057], 50.00th=[16909], 60.00th=[18744], 00:39:53.295 | 70.00th=[25035], 80.00th=[30540], 90.00th=[53740], 95.00th=[57934], 00:39:53.295 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:39:53.295 | 99.99th=[70779] 00:39:53.295 bw ( KiB/s): min= 8328, max=16248, per=19.91%, avg=12288.00, stdev=5600.29, samples=2 00:39:53.295 iops : min= 2082, max= 4062, avg=3072.00, stdev=1400.07, samples=2 00:39:53.295 lat (msec) : 10=10.11%, 20=54.27%, 50=23.87%, 100=11.75% 00:39:53.295 cpu : usr=1.63%, sys=6.42%, ctx=292, majf=0, minf=1 00:39:53.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:39:53.295 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.295 issued rwts: total=2802,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.295 job2: (groupid=0, jobs=1): err= 0: pid=1109899: Tue Oct 1 01:57:32 2024 00:39:53.295 read: IOPS=3617, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1002msec) 00:39:53.295 slat (usec): min=3, max=16811, avg=135.28, stdev=926.54 00:39:53.295 clat (usec): min=1584, max=38855, avg=16599.13, stdev=5813.30 00:39:53.295 lat (usec): min=1593, max=38877, avg=16734.41, stdev=5852.37 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 3949], 5.00th=[ 8586], 10.00th=[10683], 20.00th=[12256], 00:39:53.295 | 30.00th=[12911], 40.00th=[13698], 50.00th=[15401], 60.00th=[17433], 00:39:53.295 | 70.00th=[19792], 80.00th=[21890], 90.00th=[24511], 95.00th=[27132], 00:39:53.295 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:39:53.295 | 99.99th=[39060] 00:39:53.295 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:39:53.295 slat (usec): min=4, max=23384, avg=114.35, stdev=817.45 00:39:53.295 clat (usec): min=3142, max=55326, avg=16155.56, stdev=6509.29 00:39:53.295 lat (usec): min=3161, max=55335, avg=16269.91, stdev=6552.06 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 4621], 5.00th=[ 8094], 10.00th=[10421], 20.00th=[11863], 00:39:53.295 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13960], 60.00th=[14877], 00:39:53.295 | 70.00th=[17171], 80.00th=[21365], 90.00th=[24773], 95.00th=[26608], 00:39:53.295 | 99.00th=[38011], 99.50th=[47449], 99.90th=[49546], 99.95th=[55313], 00:39:53.295 | 99.99th=[55313] 00:39:53.295 bw ( KiB/s): min=15696, max=16384, per=25.99%, avg=16040.00, stdev=486.49, samples=2 00:39:53.295 iops : min= 3924, max= 4096, avg=4010.00, stdev=121.62, samples=2 00:39:53.295 lat (msec) : 2=0.06%, 4=0.60%, 10=7.72%, 20=65.35%, 50=26.23% 00:39:53.295 lat (msec) : 100=0.04% 00:39:53.295 cpu : usr=4.10%, sys=8.59%, ctx=383, majf=0, minf=1 00:39:53.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:53.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.295 issued rwts: total=3625,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.295 job3: (groupid=0, jobs=1): err= 0: pid=1109900: Tue Oct 1 01:57:32 2024 00:39:53.295 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:39:53.295 slat (usec): min=3, max=21162, avg=127.61, stdev=900.20 00:39:53.295 clat (usec): min=8838, max=55017, avg=17051.62, stdev=8405.49 00:39:53.295 lat (usec): min=8867, max=55033, avg=17179.23, stdev=8464.55 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10945], 20.00th=[11863], 00:39:53.295 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13042], 60.00th=[14877], 00:39:53.295 | 70.00th=[17171], 80.00th=[21365], 90.00th=[29230], 95.00th=[36439], 00:39:53.295 | 99.00th=[49546], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:39:53.295 | 99.99th=[54789] 00:39:53.295 write: IOPS=3999, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:39:53.295 slat (usec): min=4, max=23054, avg=123.10, stdev=983.63 00:39:53.295 clat (usec): min=404, max=62724, avg=16510.91, stdev=9384.36 00:39:53.295 lat (usec): 
min=459, max=62738, avg=16634.01, stdev=9491.63 00:39:53.295 clat percentiles (usec): 00:39:53.295 | 1.00th=[ 1942], 5.00th=[ 5735], 10.00th=[10552], 20.00th=[11731], 00:39:53.295 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:39:53.295 | 70.00th=[15533], 80.00th=[21890], 90.00th=[29230], 95.00th=[37487], 00:39:53.295 | 99.00th=[47973], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:39:53.295 | 99.99th=[62653] 00:39:53.295 bw ( KiB/s): min=13808, max=17328, per=25.22%, avg=15568.00, stdev=2489.02, samples=2 00:39:53.295 iops : min= 3452, max= 4332, avg=3892.00, stdev=622.25, samples=2 00:39:53.295 lat (usec) : 500=0.07%, 750=0.05%, 1000=0.05% 00:39:53.295 lat (msec) : 2=0.51%, 4=0.16%, 10=5.75%, 20=68.46%, 50=24.14% 00:39:53.295 lat (msec) : 100=0.82% 00:39:53.295 cpu : usr=4.98%, sys=7.27%, ctx=343, majf=0, minf=1 00:39:53.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:53.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:53.295 issued rwts: total=3584,4019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:53.295 00:39:53.295 Run status group 0 (all jobs): 00:39:53.295 READ: bw=54.6MiB/s (57.3MB/s), 10.5MiB/s-17.9MiB/s (11.0MB/s-18.8MB/s), io=57.1MiB (59.9MB), run=1002-1045msec 00:39:53.295 WRITE: bw=60.3MiB/s (63.2MB/s), 11.5MiB/s-19.2MiB/s (12.0MB/s-20.2MB/s), io=63.0MiB (66.0MB), run=1002-1045msec 00:39:53.295 00:39:53.295 Disk stats (read/write): 00:39:53.295 nvme0n1: ios=3915/4096, merge=0/0, ticks=24577/27046, in_queue=51623, util=99.20% 00:39:53.295 nvme0n2: ios=2101/2560, merge=0/0, ticks=15437/16345, in_queue=31782, util=99.19% 00:39:53.295 nvme0n3: ios=3122/3254, merge=0/0, ticks=42674/44770, in_queue=87444, util=99.17% 00:39:53.295 nvme0n4: ios=3081/3508, merge=0/0, ticks=30345/34826, in_queue=65171, util=91.08% 00:39:53.295 01:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:53.295 [global] 00:39:53.295 thread=1 00:39:53.295 invalidate=1 00:39:53.295 rw=randwrite 00:39:53.295 time_based=1 00:39:53.295 runtime=1 00:39:53.295 ioengine=libaio 00:39:53.295 direct=1 00:39:53.295 bs=4096 00:39:53.295 iodepth=128 00:39:53.295 norandommap=0 00:39:53.295 numjobs=1 00:39:53.295 00:39:53.295 verify_dump=1 00:39:53.295 verify_backlog=512 00:39:53.295 verify_state_save=0 00:39:53.295 do_verify=1 00:39:53.295 verify=crc32c-intel 00:39:53.295 [job0] 00:39:53.295 filename=/dev/nvme0n1 00:39:53.295 [job1] 00:39:53.295 filename=/dev/nvme0n2 00:39:53.295 [job2] 00:39:53.295 filename=/dev/nvme0n3 00:39:53.295 [job3] 00:39:53.295 filename=/dev/nvme0n4 00:39:53.295 Could not set queue depth (nvme0n1) 00:39:53.295 Could not set queue depth (nvme0n2) 00:39:53.295 Could not set queue depth (nvme0n3) 00:39:53.295 Could not set queue depth (nvme0n4) 00:39:53.295 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:53.295 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:53.295 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:53.295 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:39:53.295 fio-3.35 00:39:53.295 Starting 4 threads 00:39:54.672 00:39:54.672 job0: (groupid=0, jobs=1): err= 0: pid=1110130: Tue Oct 1 01:57:34 2024 00:39:54.672 read: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1012msec) 00:39:54.672 slat (usec): min=2, max=13250, avg=137.24, stdev=864.52 00:39:54.672 clat (usec): min=1878, max=48856, avg=15112.60, stdev=6984.77 00:39:54.672 lat (usec): min=3013, max=48861, avg=15249.84, stdev=7059.82 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 6259], 5.00th=[10290], 10.00th=[10683], 20.00th=[10814], 00:39:54.672 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[15139], 00:39:54.672 | 70.00th=[16057], 80.00th=[16909], 90.00th=[20317], 95.00th=[31065], 00:39:54.672 | 99.00th=[44303], 99.50th=[45351], 99.90th=[49021], 99.95th=[49021], 00:39:54.672 | 99.99th=[49021] 00:39:54.672 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:39:54.672 slat (usec): min=3, max=16720, avg=153.01, stdev=815.36 00:39:54.672 clat (usec): min=1138, max=51520, avg=22317.59, stdev=11848.31 00:39:54.672 lat (usec): min=1150, max=51527, avg=22470.60, stdev=11934.78 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 3720], 5.00th=[ 8356], 10.00th=[11207], 20.00th=[11731], 00:39:54.672 | 30.00th=[12518], 40.00th=[15664], 50.00th=[17957], 60.00th=[23987], 00:39:54.672 | 70.00th=[27919], 80.00th=[38011], 90.00th=[39584], 95.00th=[41157], 00:39:54.672 | 99.00th=[48497], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:39:54.672 | 99.99th=[51643] 00:39:54.672 bw ( KiB/s): min=11544, max=17088, per=24.18%, avg=14316.00, stdev=3920.20, samples=2 00:39:54.672 iops : min= 2886, max= 4272, avg=3579.00, stdev=980.05, samples=2 00:39:54.672 lat (msec) : 2=0.04%, 4=0.80%, 10=4.25%, 20=65.05%, 50=29.65% 00:39:54.672 lat (msec) : 100=0.21% 00:39:54.672 cpu : usr=2.47%, sys=4.35%, ctx=399, majf=0, minf=1 00:39:54.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:54.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:54.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:54.672 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:54.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:54.672 job1: (groupid=0, jobs=1): err= 0: pid=1110131: Tue Oct 1 01:57:34 2024 00:39:54.672 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:39:54.672 slat (usec): min=3, max=11446, avg=106.45, stdev=624.69 00:39:54.672 clat (usec): min=5724, max=39907, avg=13680.33, stdev=5183.40 00:39:54.672 lat (usec): min=5729, max=39921, avg=13786.78, stdev=5236.32 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10421], 00:39:54.672 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11731], 60.00th=[12780], 00:39:54.672 | 70.00th=[13698], 80.00th=[16712], 90.00th=[19792], 95.00th=[25822], 00:39:54.672 | 99.00th=[31589], 99.50th=[31589], 99.90th=[33424], 99.95th=[34866], 00:39:54.672 | 99.99th=[40109] 00:39:54.672 write: IOPS=4722, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1006msec); 0 zone resets 00:39:54.672 slat (usec): min=4, max=11289, avg=100.60, stdev=564.65 00:39:54.672 clat (usec): min=4996, max=44763, avg=13537.52, stdev=5680.88 00:39:54.672 lat (usec): min=5549, max=44772, avg=13638.12, stdev=5729.62 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 7504], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10814], 
00:39:54.672 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[12125], 00:39:54.672 | 70.00th=[12649], 80.00th=[14877], 90.00th=[19530], 95.00th=[24249], 00:39:54.672 | 99.00th=[41157], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:39:54.672 | 99.99th=[44827] 00:39:54.672 bw ( KiB/s): min=17628, max=19328, per=31.21%, avg=18478.00, stdev=1202.08, samples=2 00:39:54.672 iops : min= 4407, max= 4832, avg=4619.50, stdev=300.52, samples=2 00:39:54.672 lat (msec) : 10=12.01%, 20=78.77%, 50=9.22% 00:39:54.672 cpu : usr=3.98%, sys=6.57%, ctx=528, majf=0, minf=1 00:39:54.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:54.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:54.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:54.672 issued rwts: total=4608,4751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:54.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:54.672 job2: (groupid=0, jobs=1): err= 0: pid=1110132: Tue Oct 1 01:57:34 2024 00:39:54.672 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:39:54.672 slat (usec): min=2, max=29961, avg=155.96, stdev=1215.63 00:39:54.672 clat (usec): min=5730, max=58547, avg=19502.87, stdev=6463.15 00:39:54.672 lat (usec): min=5736, max=58575, avg=19658.83, stdev=6562.45 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 8979], 5.00th=[11600], 10.00th=[13173], 20.00th=[14615], 00:39:54.672 | 30.00th=[15401], 40.00th=[16450], 50.00th=[17957], 60.00th=[19268], 00:39:54.672 | 70.00th=[21365], 80.00th=[23200], 90.00th=[30540], 95.00th=[33424], 00:39:54.672 | 99.00th=[35914], 99.50th=[38536], 99.90th=[40633], 99.95th=[44827], 00:39:54.672 | 99.99th=[58459] 00:39:54.672 write: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:39:54.672 slat (usec): min=3, max=22997, avg=182.00, stdev=972.72 00:39:54.672 clat (usec): min=782, max=67860, avg=25446.54, stdev=13380.63 00:39:54.672 lat (usec): min=3141, max=67919, avg=25628.54, stdev=13454.48 00:39:54.672 clat percentiles (usec): 00:39:54.672 | 1.00th=[ 6325], 5.00th=[10421], 10.00th=[11994], 20.00th=[15139], 00:39:54.672 | 30.00th=[17433], 40.00th=[18744], 50.00th=[19530], 60.00th=[24249], 00:39:54.672 | 70.00th=[30540], 80.00th=[36439], 90.00th=[47973], 95.00th=[51119], 00:39:54.672 | 99.00th=[62653], 99.50th=[65274], 99.90th=[66323], 99.95th=[66323], 00:39:54.672 | 99.99th=[67634] 00:39:54.673 bw ( KiB/s): min=10944, max=12528, per=19.82%, avg=11736.00, stdev=1120.06, samples=2 00:39:54.673 iops : min= 2736, max= 3132, avg=2934.00, stdev=280.01, samples=2 00:39:54.673 lat (usec) : 1000=0.02% 00:39:54.673 lat (msec) : 4=0.14%, 10=3.31%, 20=55.85%, 50=37.26%, 100=3.42% 00:39:54.673 cpu : usr=2.08%, sys=3.07%, ctx=277, majf=0, minf=1 00:39:54.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:54.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:54.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:54.673 issued rwts: total=2560,3062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:54.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:54.673 job3: (groupid=0, jobs=1): err= 0: pid=1110133: Tue Oct 1 01:57:34 2024 00:39:54.673 read: IOPS=3174, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:39:54.673 slat (usec): min=2, max=13260, avg=134.65, stdev=754.00 00:39:54.673 clat (usec): min=2139, max=69087, avg=17809.70, stdev=6838.14 00:39:54.673 lat 
(usec): min=2143, max=69096, avg=17944.35, stdev=6888.94 00:39:54.673 clat percentiles (usec): 00:39:54.673 | 1.00th=[ 4228], 5.00th=[ 9765], 10.00th=[12780], 20.00th=[14353], 00:39:54.673 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15926], 60.00th=[16909], 00:39:54.673 | 70.00th=[17957], 80.00th=[19792], 90.00th=[27395], 95.00th=[33162], 00:39:54.673 | 99.00th=[39584], 99.50th=[52167], 99.90th=[59507], 99.95th=[59507], 00:39:54.673 | 99.99th=[68682] 00:39:54.673 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:39:54.673 slat (usec): min=3, max=27499, avg=151.81, stdev=1023.46 00:39:54.673 clat (usec): min=7432, max=66889, avg=19577.73, stdev=10385.97 00:39:54.673 lat (usec): min=7441, max=66897, avg=19729.54, stdev=10468.04 00:39:54.673 clat percentiles (usec): 00:39:54.673 | 1.00th=[10159], 5.00th=[12911], 10.00th=[14091], 20.00th=[14353], 00:39:54.673 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[15926], 00:39:54.673 | 70.00th=[16909], 80.00th=[20841], 90.00th=[32375], 95.00th=[44827], 00:39:54.673 | 99.00th=[63177], 99.50th=[65274], 99.90th=[66323], 99.95th=[66847], 00:39:54.673 | 99.99th=[66847] 00:39:54.673 bw ( KiB/s): min=13432, max=15160, per=24.14%, avg=14296.00, stdev=1221.88, samples=2 00:39:54.673 iops : min= 3358, max= 3790, avg=3574.00, stdev=305.47, samples=2 00:39:54.673 lat (msec) : 4=0.09%, 10=2.67%, 20=76.71%, 50=18.00%, 100=2.54% 00:39:54.673 cpu : usr=3.29%, sys=4.18%, ctx=390, majf=0, minf=1 00:39:54.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:54.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:54.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:54.673 issued rwts: total=3190,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:54.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:54.673 00:39:54.673 Run status group 0 (all jobs): 00:39:54.673 READ: bw=52.3MiB/s (54.9MB/s), 9.89MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=53.0MiB (55.5MB), run=1005-1012msec 00:39:54.673 WRITE: bw=57.8MiB/s (60.6MB/s), 11.8MiB/s-18.4MiB/s (12.4MB/s-19.3MB/s), io=58.5MiB (61.4MB), run=1005-1012msec 00:39:54.673 00:39:54.673 Disk stats (read/write): 00:39:54.673 nvme0n1: ios=2610/2959, merge=0/0, ticks=37382/68159, in_queue=105541, util=87.17% 00:39:54.673 nvme0n2: ios=3635/4079, merge=0/0, ticks=25398/26856, in_queue=52254, util=94.11% 00:39:54.673 nvme0n3: ios=2097/2560, merge=0/0, ticks=30922/49236, in_queue=80158, util=98.23% 00:39:54.673 nvme0n4: ios=2980/3072, merge=0/0, ticks=22901/23327, in_queue=46228, util=98.43% 00:39:54.673 01:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:54.673 01:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1110265 00:39:54.673 01:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:54.673 01:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:54.673 [global] 00:39:54.673 thread=1 00:39:54.673 invalidate=1 00:39:54.673 rw=read 00:39:54.673 time_based=1 00:39:54.673 runtime=10 00:39:54.673 ioengine=libaio 00:39:54.673 direct=1 00:39:54.673 bs=4096 00:39:54.673 iodepth=1 00:39:54.673 norandommap=1 00:39:54.673 numjobs=1 00:39:54.673 00:39:54.673 [job0] 00:39:54.673 filename=/dev/nvme0n1 00:39:54.673 [job1] 
00:39:54.673 filename=/dev/nvme0n2 00:39:54.673 [job2] 00:39:54.673 filename=/dev/nvme0n3 00:39:54.673 [job3] 00:39:54.673 filename=/dev/nvme0n4 00:39:54.673 Could not set queue depth (nvme0n1) 00:39:54.673 Could not set queue depth (nvme0n2) 00:39:54.673 Could not set queue depth (nvme0n3) 00:39:54.673 Could not set queue depth (nvme0n4) 00:39:54.673 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:54.673 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:54.673 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:54.673 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:54.673 fio-3.35 00:39:54.673 Starting 4 threads 00:39:57.953 01:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:57.953 01:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:57.953 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=18419712, buflen=4096 00:39:57.953 fio: pid=1110476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:58.210 01:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.210 01:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:58.210 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28200960, buflen=4096 00:39:58.210 fio: pid=1110475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:58.467 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42790912, buflen=4096 00:39:58.467 fio: pid=1110471, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:58.467 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.467 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:58.724 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=34611200, buflen=4096 00:39:58.724 fio: pid=1110474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:58.724 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:58.724 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:58.724 00:39:58.724 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1110471: Tue Oct 1 01:57:38 2024 00:39:58.724 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(40.8MiB/3538msec) 00:39:58.724 slat (usec): min=4, max=15664, avg=15.36, stdev=290.45 00:39:58.724 clat (usec): 
min=248, max=1880, avg=318.59, stdev=50.16 00:39:58.724 lat (usec): min=254, max=16028, avg=333.95, stdev=297.73 00:39:58.724 clat percentiles (usec): 00:39:58.724 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 289], 00:39:58.724 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:39:58.724 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 383], 00:39:58.724 | 99.00th=[ 498], 99.50th=[ 545], 99.90th=[ 873], 99.95th=[ 979], 00:39:58.724 | 99.99th=[ 1483] 00:39:58.724 bw ( KiB/s): min=11248, max=12848, per=38.03%, avg=11965.33, stdev=680.81, samples=6 00:39:58.724 iops : min= 2812, max= 3212, avg=2991.33, stdev=170.20, samples=6 00:39:58.724 lat (usec) : 250=0.01%, 500=99.00%, 750=0.86%, 1000=0.09% 00:39:58.724 lat (msec) : 2=0.04% 00:39:58.724 cpu : usr=1.53%, sys=4.78%, ctx=10453, majf=0, minf=2 00:39:58.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.724 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.724 issued rwts: total=10448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.724 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1110474: Tue Oct 1 01:57:38 2024 00:39:58.724 read: IOPS=2195, BW=8779KiB/s (8990kB/s)(33.0MiB/3850msec) 00:39:58.724 slat (usec): min=4, max=23326, avg=19.52, stdev=355.56 00:39:58.724 clat (usec): min=259, max=41406, avg=430.64, stdev=1502.25 00:39:58.724 lat (usec): min=267, max=41411, avg=450.16, stdev=1543.64 00:39:58.724 clat percentiles (usec): 00:39:58.724 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 310], 00:39:58.724 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:39:58.724 | 70.00th=[ 392], 80.00th=[ 433], 90.00th=[ 469], 95.00th=[ 502], 00:39:58.724 | 99.00th=[ 578], 99.50th=[ 627], 99.90th=[40633], 99.95th=[41157], 00:39:58.724 | 99.99th=[41157] 00:39:58.724 bw ( KiB/s): min= 5288, max=10168, per=27.59%, avg=8678.57, stdev=1869.23, samples=7 00:39:58.724 iops : min= 1322, max= 2542, avg=2169.57, stdev=467.27, samples=7 00:39:58.724 lat (usec) : 500=95.01%, 750=4.65%, 1000=0.07% 00:39:58.724 lat (msec) : 2=0.12%, 50=0.14% 00:39:58.724 cpu : usr=1.33%, sys=3.12%, ctx=8458, majf=0, minf=1 00:39:58.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.724 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.725 issued rwts: total=8451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.725 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1110475: Tue Oct 1 01:57:38 2024 00:39:58.725 read: IOPS=2129, BW=8516KiB/s (8720kB/s)(26.9MiB/3234msec) 00:39:58.725 slat (nsec): min=5217, max=65608, avg=9532.46, stdev=4967.97 00:39:58.725 clat (usec): min=275, max=41393, avg=454.29, stdev=1693.32 00:39:58.725 lat (usec): min=283, max=41400, avg=463.82, stdev=1693.26 00:39:58.725 clat percentiles (usec): 00:39:58.725 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:39:58.725 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 379], 00:39:58.725 | 70.00th=[ 408], 80.00th=[ 441], 90.00th=[ 486], 95.00th=[ 515], 00:39:58.725 | 
99.00th=[ 611], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41157], 00:39:58.725 | 99.99th=[41157] 00:39:58.725 bw ( KiB/s): min= 5040, max=10176, per=26.78%, avg=8426.67, stdev=1811.36, samples=6 00:39:58.725 iops : min= 1260, max= 2544, avg=2106.67, stdev=452.84, samples=6 00:39:58.725 lat (usec) : 500=92.58%, 750=7.23% 00:39:58.725 lat (msec) : 50=0.17% 00:39:58.725 cpu : usr=1.24%, sys=3.31%, ctx=6886, majf=0, minf=1 00:39:58.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.725 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.725 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.725 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1110476: Tue Oct 1 01:57:38 2024 00:39:58.725 read: IOPS=1524, BW=6098KiB/s (6244kB/s)(17.6MiB/2950msec) 00:39:58.725 slat (nsec): min=4354, max=67688, avg=13265.24, stdev=9161.36 00:39:58.725 clat (usec): min=272, max=44987, avg=634.46, stdev=3349.83 00:39:58.725 lat (usec): min=281, max=45003, avg=647.72, stdev=3349.97 00:39:58.725 clat percentiles (usec): 00:39:58.725 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:39:58.725 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 371], 00:39:58.725 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 469], 00:39:58.725 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[42206], 99.95th=[43254], 00:39:58.725 | 99.99th=[44827] 00:39:58.725 bw ( KiB/s): min= 104, max=10368, per=22.20%, avg=6985.60, stdev=4519.96, samples=5 00:39:58.725 iops : min= 26, max= 2592, avg=1746.40, stdev=1129.99, samples=5 00:39:58.725 lat (usec) : 500=97.13%, 750=2.13% 00:39:58.725 lat (msec) : 2=0.04%, 50=0.67% 00:39:58.725 cpu : usr=0.88%, sys=2.27%, ctx=4499, majf=0, minf=2 00:39:58.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.725 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.725 issued rwts: total=4498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:58.725 00:39:58.725 Run status group 0 (all jobs): 00:39:58.725 READ: bw=30.7MiB/s (32.2MB/s), 6098KiB/s-11.5MiB/s (6244kB/s-12.1MB/s), io=118MiB (124MB), run=2950-3850msec 00:39:58.725 00:39:58.725 Disk stats (read/write): 00:39:58.725 nvme0n1: ios=9929/0, merge=0/0, ticks=4121/0, in_queue=4121, util=98.00% 00:39:58.725 nvme0n2: ios=7833/0, merge=0/0, ticks=3354/0, in_queue=3354, util=95.07% 00:39:58.725 nvme0n3: ios=6584/0, merge=0/0, ticks=2952/0, in_queue=2952, util=96.82% 00:39:58.725 nvme0n4: ios=4495/0, merge=0/0, ticks=2742/0, in_queue=2742, util=96.71% 00:39:59.005 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:59.005 01:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:59.262 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:59.262 01:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:59.520 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:59.520 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:59.777 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:59.777 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:00.035 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:00.035 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1110265 00:40:00.035 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:00.035 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:00.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:00.293 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:00.293 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:40:00.293 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:00.293 01:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:00.293 nvmf hotplug test: fio failed as expected 00:40:00.293 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.551 rmmod nvme_tcp 00:40:00.551 rmmod nvme_fabrics 00:40:00.551 rmmod nvme_keyring 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 1108372 ']' 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 1108372 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1108372 ']' 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1108372 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:00.551 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1108372 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1108372' 00:40:00.810 killing process with pid 1108372 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1108372 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1108372 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@787 -- # iptables-save 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.810 01:57:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.416 00:40:03.416 real 0m23.812s 00:40:03.416 user 1m6.613s 00:40:03.416 sys 0m10.969s 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:03.416 ************************************ 00:40:03.416 END TEST nvmf_fio_target 00:40:03.416 ************************************ 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:03.416 ************************************ 00:40:03.416 START TEST nvmf_bdevio 00:40:03.416 ************************************ 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:03.416 * Looking for test storage... 
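For reference, the nvmf_fio_target hotplug phase traced above (target/fio.sh@58 through @70) reduces to: launch a long-running fio read job against the exported namespaces, delete the backing RAID/concat/malloc bdevs over RPC while the job is still running, and require fio to fail with "Operation not supported" on every file. A minimal sketch of that flow, reusing the script paths and bdev names that appear in the trace (an illustration, not the fio.sh harness itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start a 10-second read job against the NVMe-oF namespaces in the background
  "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # pull the backing bdevs out from under fio while it is still issuing I/O
  "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
  "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
  for malloc in Malloc0 Malloc1 Malloc2; do
      "$SPDK/scripts/rpc.py" bdev_malloc_delete "$malloc"
  done
  # fio is expected to exit non-zero, with err=95 (Operation not supported) per file
  if wait "$fio_pid"; then
      echo "unexpected: fio survived bdev removal" >&2
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi

The err=95 io_u errors and the non-zero fio_status recorded above are exactly the outcome this sequence is meant to produce.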
00:40:03.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:03.416 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:03.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.416 --rc genhtml_branch_coverage=1 00:40:03.417 --rc genhtml_function_coverage=1 00:40:03.417 --rc genhtml_legend=1 00:40:03.417 --rc geninfo_all_blocks=1 00:40:03.417 --rc geninfo_unexecuted_blocks=1 00:40:03.417 00:40:03.417 ' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.417 --rc genhtml_branch_coverage=1 00:40:03.417 --rc genhtml_function_coverage=1 00:40:03.417 --rc genhtml_legend=1 00:40:03.417 --rc geninfo_all_blocks=1 00:40:03.417 --rc geninfo_unexecuted_blocks=1 00:40:03.417 00:40:03.417 ' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.417 --rc genhtml_branch_coverage=1 00:40:03.417 --rc genhtml_function_coverage=1 00:40:03.417 --rc genhtml_legend=1 00:40:03.417 --rc geninfo_all_blocks=1 00:40:03.417 --rc geninfo_unexecuted_blocks=1 00:40:03.417 00:40:03.417 ' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:03.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.417 --rc genhtml_branch_coverage=1 00:40:03.417 --rc genhtml_function_coverage=1 00:40:03.417 --rc genhtml_legend=1 00:40:03.417 --rc geninfo_all_blocks=1 00:40:03.417 --rc geninfo_unexecuted_blocks=1 00:40:03.417 00:40:03.417 ' 00:40:03.417 01:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:03.417 01:57:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:03.417 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:03.418 01:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:05.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:05.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:05.324 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:05.325 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.325 
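The device discovery traced in this stretch (gather_supported_nvmf_pci_devs in nvmf/common.sh) boils down to matching the supported Intel E810 PCI IDs and then reading back which kernel netdev sits on each PCI function via sysfs. A rough standalone equivalent, assuming lspci is available (common.sh itself walks a pre-built pci_bus_cache rather than calling lspci):

  # list E810 functions (vendor 0x8086, device 0x159b) and the netdevs bound to them
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "Found net devices under $pci: $(basename "$net")"
      done
  done

On this rig that yields cvl_0_0 and cvl_0_1 for 0000:0a:00.0 and 0000:0a:00.1, as echoed below.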
01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:05.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:05.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:40:05.325 00:40:05.325 --- 10.0.0.2 ping statistics --- 00:40:05.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.325 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:05.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:05.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:40:05.325 00:40:05.325 --- 10.0.0.1 ping statistics --- 00:40:05.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.325 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:05.325 01:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=1113097 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 1113097 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1113097 ']' 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:05.325 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.325 [2024-10-01 01:57:45.058500] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:05.325 [2024-10-01 01:57:45.059560] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:05.325 [2024-10-01 01:57:45.059627] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.325 [2024-10-01 01:57:45.131315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.584 [2024-10-01 01:57:45.226552] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.584 [2024-10-01 01:57:45.226609] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.584 [2024-10-01 01:57:45.226636] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.584 [2024-10-01 01:57:45.226649] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.584 [2024-10-01 01:57:45.226661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.584 [2024-10-01 01:57:45.226762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:05.584 [2024-10-01 01:57:45.226818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:05.584 [2024-10-01 01:57:45.226869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:05.584 [2024-10-01 01:57:45.226871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:05.584 [2024-10-01 01:57:45.328277] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:05.584 [2024-10-01 01:57:45.328532] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
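At this point bdevio.sh has started the NVMe-oF target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x78, then waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers. A minimal by-hand sketch of that launch is below; the binary path is taken from the trace, and the socket-polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual code.

  # Launch the target in interrupt mode inside the test namespace (path assumed
  # to match the workspace seen in the trace), then wait for the RPC socket.
  NS=cvl_0_0_ns_spdk
  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # stand-in for waitforlisten: poll until the UNIX-domain RPC socket exists
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done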
00:40:05.584 [2024-10-01 01:57:45.328798] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:05.584 [2024-10-01 01:57:45.329379] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:05.584 [2024-10-01 01:57:45.329654] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.584 [2024-10-01 01:57:45.383632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.584 Malloc0 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.584 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:05.844 [2024-10-01 01:57:45.439804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.844 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.844 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:05.844 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:05.844 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:05.845 { 00:40:05.845 "params": { 00:40:05.845 "name": "Nvme$subsystem", 00:40:05.845 "trtype": "$TEST_TRANSPORT", 00:40:05.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:05.845 "adrfam": "ipv4", 00:40:05.845 "trsvcid": "$NVMF_PORT", 00:40:05.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:05.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:05.845 "hdgst": ${hdgst:-false}, 00:40:05.845 "ddgst": ${ddgst:-false} 00:40:05.845 }, 00:40:05.845 "method": "bdev_nvme_attach_controller" 00:40:05.845 } 00:40:05.845 EOF 00:40:05.845 )") 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:40:05.845 01:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:05.845 "params": { 00:40:05.845 "name": "Nvme1", 00:40:05.845 "trtype": "tcp", 00:40:05.845 "traddr": "10.0.0.2", 00:40:05.845 "adrfam": "ipv4", 00:40:05.845 "trsvcid": "4420", 00:40:05.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:05.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:05.845 "hdgst": false, 00:40:05.845 "ddgst": false 00:40:05.845 }, 00:40:05.845 "method": "bdev_nvme_attach_controller" 00:40:05.845 }' 00:40:05.845 [2024-10-01 01:57:45.486757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:40:05.845 [2024-10-01 01:57:45.486834] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113130 ] 00:40:05.845 [2024-10-01 01:57:45.549560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:05.845 [2024-10-01 01:57:45.638271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.845 [2024-10-01 01:57:45.638321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:05.845 [2024-10-01 01:57:45.638324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.104 I/O targets: 00:40:06.104 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:06.104 00:40:06.104 00:40:06.104 CUnit - A unit testing framework for C - Version 2.1-3 00:40:06.104 http://cunit.sourceforge.net/ 00:40:06.104 00:40:06.104 00:40:06.104 Suite: bdevio tests on: Nvme1n1 00:40:06.104 Test: blockdev write read block ...passed 00:40:06.104 Test: blockdev write zeroes read block ...passed 00:40:06.104 Test: blockdev write zeroes read no split ...passed 00:40:06.104 Test: blockdev write zeroes read split ...passed 00:40:06.364 Test: blockdev write zeroes read split partial ...passed 00:40:06.364 Test: blockdev reset ...[2024-10-01 01:57:46.008966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:06.365 [2024-10-01 01:57:46.009096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dee90 (9): Bad file descriptor 00:40:06.365 [2024-10-01 01:57:46.102200] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:40:06.365 passed 00:40:06.365 Test: blockdev write read 8 blocks ...passed 00:40:06.365 Test: blockdev write read size > 128k ...passed 00:40:06.365 Test: blockdev write read invalid size ...passed 00:40:06.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:06.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:06.365 Test: blockdev write read max offset ...passed 00:40:06.625 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:06.625 Test: blockdev writev readv 8 blocks ...passed 00:40:06.625 Test: blockdev writev readv 30 x 1block ...passed 00:40:06.625 Test: blockdev writev readv block ...passed 00:40:06.625 Test: blockdev writev readv size > 128k ...passed 00:40:06.625 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:06.625 Test: blockdev comparev and writev ...[2024-10-01 01:57:46.318036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.318091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.318124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.318143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.318585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.318609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.318632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.319100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.319126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.319149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.319165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.319607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.319632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:06.625 [2024-10-01 01:57:46.319654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:06.625 [2024-10-01 01:57:46.319670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:06.625 passed 00:40:06.625 Test: blockdev nvme passthru rw ...passed 00:40:06.625 Test: blockdev nvme passthru vendor specific ...[2024-10-01 01:57:46.402350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:06.626 [2024-10-01 01:57:46.402378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:06.626 [2024-10-01 01:57:46.402561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:06.626 [2024-10-01 01:57:46.402584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:06.626 [2024-10-01 01:57:46.402759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:06.626 [2024-10-01 01:57:46.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:06.626 [2024-10-01 01:57:46.402958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:06.626 [2024-10-01 01:57:46.402981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:06.626 passed 00:40:06.626 Test: blockdev nvme admin passthru ...passed 00:40:06.626 Test: blockdev copy ...passed 00:40:06.626 00:40:06.626 Run Summary: Type Total Ran Passed Failed Inactive 00:40:06.626 suites 1 1 n/a 0 0 00:40:06.626 tests 23 23 23 0 0 00:40:06.626 asserts 152 152 152 0 n/a 00:40:06.626 00:40:06.626 Elapsed time = 1.204 seconds 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.885 rmmod nvme_tcp 00:40:06.885 rmmod nvme_fabrics 00:40:06.885 rmmod nvme_keyring 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
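The rpc_cmd calls traced earlier in bdevio.sh are what built the configuration this bdevio run exercised: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. A consolidated sketch of the same sequence issued directly through SPDK's scripts/rpc.py is below; the rpc.py path and socket are assumptions, and the transport options are copied verbatim from the trace.

  # Target-side RPC sequence equivalent to the traced rpc_cmd calls (sketch only;
  # the harness drives these through its rpc_cmd wrapper on /var/tmp/spdk.sock).
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                     # options taken verbatim from the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # exposed to the host as Nvme1n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420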
00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 1113097 ']' 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 1113097 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1113097 ']' 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1113097 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:06.885 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113097 00:40:07.144 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:40:07.144 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:40:07.144 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113097' 00:40:07.144 killing process with pid 1113097 00:40:07.144 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1113097 00:40:07.144 01:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1113097 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:07.403 01:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.316 00:40:09.316 real 0m6.315s 00:40:09.316 user 
0m8.651s 00:40:09.316 sys 0m2.415s 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:09.316 ************************************ 00:40:09.316 END TEST nvmf_bdevio 00:40:09.316 ************************************ 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:09.316 00:40:09.316 real 3m54.481s 00:40:09.316 user 8m50.793s 00:40:09.316 sys 1m26.538s 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.316 01:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:09.316 ************************************ 00:40:09.316 END TEST nvmf_target_core_interrupt_mode 00:40:09.316 ************************************ 00:40:09.316 01:57:49 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:09.316 01:57:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:09.316 01:57:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.316 01:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.316 ************************************ 00:40:09.316 START TEST nvmf_interrupt 00:40:09.316 ************************************ 00:40:09.316 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:09.577 * Looking for test storage... 
00:40:09.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:09.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.577 --rc genhtml_branch_coverage=1 00:40:09.577 --rc genhtml_function_coverage=1 00:40:09.577 --rc genhtml_legend=1 00:40:09.577 --rc geninfo_all_blocks=1 00:40:09.577 --rc geninfo_unexecuted_blocks=1 00:40:09.577 00:40:09.577 ' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:09.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.577 --rc genhtml_branch_coverage=1 00:40:09.577 --rc genhtml_function_coverage=1 00:40:09.577 --rc genhtml_legend=1 00:40:09.577 --rc geninfo_all_blocks=1 00:40:09.577 --rc geninfo_unexecuted_blocks=1 00:40:09.577 00:40:09.577 ' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:09.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.577 --rc genhtml_branch_coverage=1 00:40:09.577 --rc genhtml_function_coverage=1 00:40:09.577 --rc genhtml_legend=1 00:40:09.577 --rc geninfo_all_blocks=1 00:40:09.577 --rc geninfo_unexecuted_blocks=1 00:40:09.577 00:40:09.577 ' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:09.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.577 --rc genhtml_branch_coverage=1 00:40:09.577 --rc genhtml_function_coverage=1 00:40:09.577 --rc genhtml_legend=1 00:40:09.577 --rc geninfo_all_blocks=1 00:40:09.577 --rc geninfo_unexecuted_blocks=1 00:40:09.577 00:40:09.577 ' 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.577 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:09.578 01:57:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:11.482 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:11.482 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.482 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:11.483 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:11.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:11.483 01:57:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:11.483 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:11.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:11.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:40:11.742 00:40:11.742 --- 10.0.0.2 ping statistics --- 00:40:11.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.742 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:11.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:11.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:40:11.742 00:40:11.742 --- 10.0.0.1 ping statistics --- 00:40:11.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.742 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=1115217 00:40:11.742 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 1115217 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1115217 ']' 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:11.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:11.743 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.743 [2024-10-01 01:57:51.573323] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:11.743 [2024-10-01 01:57:51.574485] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:11.743 [2024-10-01 01:57:51.574539] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:12.001 [2024-10-01 01:57:51.647693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:12.001 [2024-10-01 01:57:51.740218] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:12.001 [2024-10-01 01:57:51.740277] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:12.001 [2024-10-01 01:57:51.740290] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:12.001 [2024-10-01 01:57:51.740302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:12.001 [2024-10-01 01:57:51.740311] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:12.001 [2024-10-01 01:57:51.743022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.001 [2024-10-01 01:57:51.743035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.001 [2024-10-01 01:57:51.835128] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:12.001 [2024-10-01 01:57:51.835153] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:12.001 [2024-10-01 01:57:51.835409] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:12.260 5000+0 records in 00:40:12.260 5000+0 records out 00:40:12.260 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0153894 s, 665 MB/s 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 AIO0 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 [2024-10-01 01:57:51.955651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.260 01:57:51 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:12.260 [2024-10-01 01:57:51.987893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1115217 0 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 0 idle 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:12.260 01:57:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115217 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.31 reactor_0' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115217 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.31 reactor_0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1115217 1 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 1 idle 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115270 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115270 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1115380 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
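The trace above is interrupt/common.sh classifying the target's reactor threads: it takes one batch sample of top for the target pid, greps the reactor_<idx> thread, strips leading whitespace, reads the %CPU column, truncates the fraction, and compares the result against the idle/busy thresholds. A minimal standalone sketch of that check, assuming GNU top (where %CPU is the ninth column of per-thread batch output) and SPDK's reactor_<idx> thread naming:

    # Sketch only: report whether reactor_<idx> of the target <pid> looks busy or idle.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
            | sed -e 's/^\s*//g' | awk '{print $9}'
    }

    classify_reactor() {
        local idle_threshold=30 busy_threshold=65 rate
        rate=$(reactor_cpu_rate "$1" "$2")
        rate=${rate%.*}                     # 99.9 -> 99, 0.0 -> 0
        if (( ${rate:-0} > busy_threshold )); then
            echo "reactor_$2 busy (${rate:-0}%)"
        elif (( ${rate:-0} <= idle_threshold )); then
            echo "reactor_$2 idle (${rate:-0}%)"
        else
            echo "reactor_$2 in between (${rate:-0}%)"
        fi
    }

    classify_reactor 1115217 0

In the run above the real helper also retries up to ten times with a one-second sleep, and it lowers busy_threshold to 30 for the busy checks issued while the perf load is ramping up.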
00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1115217 0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1115217 0 busy 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:12.520 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115217 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.31 reactor_0' 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115217 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.31 reactor_0 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:12.779 01:57:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:13.714 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:13.714 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:13.714 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:13.714 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115217 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:02.40 reactor_0' 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115217 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:02.40 reactor_0 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1115217 1 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1115217 1 busy 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:13.972 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115270 root 20 0 128.2g 47616 34176 R 86.7 0.1 0:01.09 reactor_1' 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115270 root 20 0 128.2g 47616 34176 R 86.7 0.1 0:01.09 reactor_1 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:14.231 01:57:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1115380 00:40:24.194 Initializing NVMe Controllers 00:40:24.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:24.194 Controller IO queue size 256, less than required. 00:40:24.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:24.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:24.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:24.194 Initialization complete. Launching workers. 
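Just before the output above, the harness backgrounds spdk_nvme_perf against the subsystem with -q 256 (queue depth per queue), -o 4096 (4 KiB I/Os), -w randrw -M 30 (random mixed workload, roughly 30% reads), -t 10 (ten seconds) and -c 0xC. The 0xC core mask is binary 1100, which is why the initialization messages associate the namespace with lcores 2 and 3 and keep the load off the target's reactor_0/reactor_1 threads. A small sketch for expanding such a mask into core indices:

    # Sketch: expand a hex core mask (e.g. the 0xC passed to spdk_nvme_perf) into CPU indices.
    mask_to_cores() {
        local mask=$(( $1 )) i=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$i")
            (( mask >>= 1, i++ ))
        done
        echo "${cores[*]:-none}"
    }

    mask_to_cores 0xC    # -> 2 3  (the perf workers)
    mask_to_cores 0x3    # -> 0 1  (a typical two-reactor target mask)

The summary table that follows simply adds the two per-core rows (10873.90 + 13791.80 ≈ 24665.70 IOPS), and the total average latency is, to rounding, the IOPS-weighted mean of the per-core averages.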
00:40:24.194 ======================================================== 00:40:24.194 Latency(us) 00:40:24.194 Device Information : IOPS MiB/s Average min max 00:40:24.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10873.90 42.48 23566.87 4700.62 62435.62 00:40:24.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13791.80 53.87 18574.12 4663.48 21383.80 00:40:24.194 ======================================================== 00:40:24.194 Total : 24665.70 96.35 20775.18 4663.48 62435.62 00:40:24.194 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1115217 0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 0 idle 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115217 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:18.51 reactor_0' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115217 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:18.51 reactor_0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1115217 1 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 1 idle 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115270 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:08.23 reactor_1' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115270 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:08.23 reactor_1 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:24.194 01:58:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:24.194 01:58:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:24.194 01:58:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:24.194 01:58:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:24.194 01:58:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:24.194 01:58:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:25.571 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1115217 0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 0 idle 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115217 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:18.61 reactor_0' 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115217 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:18.61 reactor_0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1115217 1 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1115217 1 idle 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1115217 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
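With the reactors back to idle after perf, the test attaches the kernel initiator via nvme connect and waits for the namespace to surface by grepping lsblk for the subsystem's serial number. A hedged sketch of that connect-and-wait step, assuming nvme-cli and the kernel nvme-tcp module are available (the real run also passes --hostnqn/--hostid, omitted here):

    # Sketch: connect the kernel host to the subsystem and wait for its namespace to appear.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    for _ in $(seq 1 15); do
        # The subsystem was created with serial SPDKISFASTANDAWESOME, so the new
        # block device can be recognized by that serial in lsblk.
        if lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; then
            break
        fi
        sleep 2
    done

The tear-down further down is the mirror image: nvme disconnect -n nqn.2016-06.io.spdk:cnode1, kill the target process, and unload the nvme-tcp/nvme-fabrics/nvme-keyring modules.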
00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1115217 -w 256 00:40:25.572 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1115270 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:08.27 reactor_1' 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1115270 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:08.27 reactor_1 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:25.830 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:26.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:26.088 rmmod nvme_tcp 00:40:26.088 rmmod nvme_fabrics 00:40:26.088 rmmod nvme_keyring 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 
1115217 ']' 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 1115217 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1115217 ']' 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1115217 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1115217 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1115217' 00:40:26.088 killing process with pid 1115217 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1115217 00:40:26.088 01:58:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1115217 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:26.346 01:58:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:28.883 01:58:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:28.883 00:40:28.883 real 0m19.018s 00:40:28.883 user 0m34.435s 00:40:28.883 sys 0m7.786s 00:40:28.883 01:58:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:28.883 01:58:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:28.883 ************************************ 00:40:28.883 END TEST nvmf_interrupt 00:40:28.883 ************************************ 00:40:28.883 00:40:28.883 real 32m57.206s 00:40:28.883 user 87m1.420s 00:40:28.883 sys 8m10.942s 00:40:28.883 01:58:08 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:28.883 01:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.883 ************************************ 00:40:28.883 END TEST nvmf_tcp 00:40:28.883 ************************************ 00:40:28.883 01:58:08 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:28.883 01:58:08 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:28.883 01:58:08 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:28.883 01:58:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:28.883 01:58:08 -- common/autotest_common.sh@10 -- # set +x 00:40:28.883 ************************************ 00:40:28.883 START TEST spdkcli_nvmf_tcp 00:40:28.883 ************************************ 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:28.883 * Looking for test storage... 00:40:28.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.883 --rc genhtml_branch_coverage=1 00:40:28.883 --rc genhtml_function_coverage=1 00:40:28.883 --rc genhtml_legend=1 00:40:28.883 --rc geninfo_all_blocks=1 00:40:28.883 --rc geninfo_unexecuted_blocks=1 00:40:28.883 00:40:28.883 ' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.883 --rc genhtml_branch_coverage=1 00:40:28.883 --rc genhtml_function_coverage=1 00:40:28.883 --rc genhtml_legend=1 00:40:28.883 --rc geninfo_all_blocks=1 00:40:28.883 --rc geninfo_unexecuted_blocks=1 00:40:28.883 00:40:28.883 ' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.883 --rc genhtml_branch_coverage=1 00:40:28.883 --rc genhtml_function_coverage=1 00:40:28.883 --rc genhtml_legend=1 00:40:28.883 --rc geninfo_all_blocks=1 00:40:28.883 --rc geninfo_unexecuted_blocks=1 00:40:28.883 00:40:28.883 ' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:28.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.883 --rc genhtml_branch_coverage=1 00:40:28.883 --rc genhtml_function_coverage=1 00:40:28.883 --rc genhtml_legend=1 00:40:28.883 --rc geninfo_all_blocks=1 00:40:28.883 --rc geninfo_unexecuted_blocks=1 00:40:28.883 00:40:28.883 ' 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:28.883 
01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:28.883 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:28.884 01:58:08 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:28.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1117385 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1117385 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1117385 ']' 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.884 [2024-10-01 01:58:08.405355] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
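The spdkcli_nvmf_tcp test then starts its own nvmf_tgt with -m 0x3 (reactors on cores 0 and 1) and waits for the RPC socket before feeding commands to spdkcli_job.py. A minimal sketch of that start-and-wait step, assuming it runs from the SPDK repo root and the default /var/tmp/spdk.sock socket; the readiness probe below uses the spdk_get_version RPC rather than the repo's waitforlisten helper:

    # Sketch: launch the target and poll its RPC socket until it answers.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!

    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            echo "nvmf_tgt (pid ${tgt_pid}) is ready"
            break
        fi
        sleep 0.1
    done

Each argument handed to spdkcli_job.py below is a '<command>' '<expected substring>' <expect-success> triple; the commands are replayed through spdkcli, and the later check_match step dumps 'll /nvmf' via scripts/spdkcli.py and compares it against match_files/spdkcli_nvmf.test.match with the match tool.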
00:40:28.884 [2024-10-01 01:58:08.405452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117385 ] 00:40:28.884 [2024-10-01 01:58:08.465129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:28.884 [2024-10-01 01:58:08.552994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.884 [2024-10-01 01:58:08.553005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:28.884 01:58:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:28.884 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:28.884 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:28.884 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:28.884 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:28.884 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:28.884 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:28.884 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:28.884 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:28.884 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:28.884 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:28.884 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:28.884 ' 00:40:32.166 [2024-10-01 01:58:11.337188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.100 [2024-10-01 01:58:12.621672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:35.627 [2024-10-01 01:58:15.001010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:37.523 [2024-10-01 01:58:17.047393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:38.896 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:38.896 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:38.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:38.896 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:38.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:38.896 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:38.896 01:58:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:39.470 01:58:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:39.470 01:58:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:39.470 01:58:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:39.470 01:58:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:39.471 01:58:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:39.471 
01:58:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:39.471 01:58:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:39.471 01:58:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:39.471 01:58:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:39.471 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:39.471 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:39.471 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:39.471 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:39.471 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:39.471 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:39.471 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:39.471 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:39.471 ' 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:46.096 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:46.096 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:46.096 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:46.096 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.096 
01:58:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1117385 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1117385 ']' 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1117385 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1117385 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1117385' 00:40:46.096 killing process with pid 1117385 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1117385 00:40:46.096 01:58:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1117385 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1117385 ']' 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1117385 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1117385 ']' 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1117385 00:40:46.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1117385) - No such process 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1117385 is not found' 00:40:46.096 Process with pid 1117385 is not found 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:46.096 00:40:46.096 real 0m16.838s 00:40:46.096 user 0m36.089s 00:40:46.096 sys 0m0.816s 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.096 01:58:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.096 ************************************ 00:40:46.096 END TEST spdkcli_nvmf_tcp 00:40:46.096 ************************************ 00:40:46.096 01:58:25 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:46.096 01:58:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:46.096 01:58:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.096 01:58:25 -- common/autotest_common.sh@10 -- # set +x 00:40:46.096 ************************************ 00:40:46.096 START TEST nvmf_identify_passthru 00:40:46.096 ************************************ 00:40:46.096 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:46.096 * Looking for test 
storage... 00:40:46.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.096 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:46.096 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:40:46.096 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:46.096 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.096 01:58:25 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:46.097 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.097 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:46.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.097 --rc genhtml_branch_coverage=1 00:40:46.097 --rc genhtml_function_coverage=1 00:40:46.097 --rc genhtml_legend=1 00:40:46.097 --rc geninfo_all_blocks=1 00:40:46.097 --rc geninfo_unexecuted_blocks=1 00:40:46.097 00:40:46.097 ' 00:40:46.097 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:46.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.097 --rc genhtml_branch_coverage=1 00:40:46.097 --rc genhtml_function_coverage=1 00:40:46.097 --rc genhtml_legend=1 00:40:46.097 --rc geninfo_all_blocks=1 00:40:46.097 --rc geninfo_unexecuted_blocks=1 00:40:46.097 00:40:46.097 ' 00:40:46.097 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:46.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.097 --rc genhtml_branch_coverage=1 00:40:46.097 --rc genhtml_function_coverage=1 00:40:46.097 --rc genhtml_legend=1 00:40:46.097 --rc geninfo_all_blocks=1 00:40:46.097 --rc geninfo_unexecuted_blocks=1 00:40:46.097 00:40:46.097 ' 00:40:46.097 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:46.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.097 --rc genhtml_branch_coverage=1 00:40:46.097 --rc genhtml_function_coverage=1 00:40:46.097 --rc genhtml_legend=1 00:40:46.097 --rc geninfo_all_blocks=1 00:40:46.097 --rc geninfo_unexecuted_blocks=1 00:40:46.097 00:40:46.097 ' 00:40:46.097 01:58:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:46.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.097 01:58:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.097 01:58:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:46.097 01:58:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.097 01:58:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:46.097 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.098 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:46.098 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.098 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:46.098 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:46.098 01:58:25 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:46.098 01:58:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:47.472 01:58:27 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:47.472 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:47.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:47.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:47.473 
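At this point both supported NICs have been matched by PCI vendor/device ID (0x8086:0x159b, the E810 "ice" devices). The trace that follows resolves each PCI address to its kernel net interface through sysfs before building the TCP test network; a standalone sketch of that lookup, assuming only the standard /sys/bus/pci layout that the script itself globs (the example address and the cvl_0_0 name are taken from the trace):

  # Sketch: list the net interfaces that sit under a given PCI function.
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] && echo "${dev##*/}"   # prints e.g. cvl_0_0, as echoed in the trace below
  done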
01:58:27 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:47.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:47.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:47.473 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:47.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:47.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:40:47.733 00:40:47.733 --- 10.0.0.2 ping statistics --- 00:40:47.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.733 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:47.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:47.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:40:47.733 00:40:47.733 --- 10.0.0.1 ping statistics --- 00:40:47.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.733 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:47.733 01:58:27 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:40:47.733 01:58:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:47.733 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:47.992 01:58:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:52.180 01:58:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:52.180 01:58:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:52.180 01:58:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:52.180 01:58:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:56.366 01:58:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:56.366 01:58:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:56.366 01:58:35 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.366 01:58:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.366 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.366 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1122023 00:40:56.366 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:56.366 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:56.366 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1122023 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1122023 ']' 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:56.366 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.366 [2024-10-01 01:58:36.078922] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:40:56.366 [2024-10-01 01:58:36.079022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:56.366 [2024-10-01 01:58:36.151649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:56.623 [2024-10-01 01:58:36.245876] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:56.623 [2024-10-01 01:58:36.245937] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:56.623 [2024-10-01 01:58:36.245966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:56.623 [2024-10-01 01:58:36.245978] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:56.623 [2024-10-01 01:58:36.245988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:56.623 [2024-10-01 01:58:36.246052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:56.623 [2024-10-01 01:58:36.246082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:56.623 [2024-10-01 01:58:36.246141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:56.623 [2024-10-01 01:58:36.246144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:56.623 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.623 INFO: Log level set to 20 00:40:56.623 INFO: Requests: 00:40:56.623 { 00:40:56.623 "jsonrpc": "2.0", 00:40:56.623 "method": "nvmf_set_config", 00:40:56.623 "id": 1, 00:40:56.623 "params": { 00:40:56.623 "admin_cmd_passthru": { 00:40:56.623 "identify_ctrlr": true 00:40:56.623 } 00:40:56.623 } 00:40:56.623 } 00:40:56.623 00:40:56.623 INFO: response: 00:40:56.623 { 00:40:56.623 "jsonrpc": "2.0", 00:40:56.623 "id": 1, 00:40:56.623 "result": true 00:40:56.623 } 00:40:56.623 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.623 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:56.623 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.624 INFO: Setting log level to 20 00:40:56.624 INFO: Setting log level to 20 00:40:56.624 INFO: Log level set to 20 00:40:56.624 INFO: Log level set to 20 00:40:56.624 INFO: Requests: 00:40:56.624 { 00:40:56.624 "jsonrpc": "2.0", 00:40:56.624 "method": "framework_start_init", 00:40:56.624 "id": 1 00:40:56.624 } 00:40:56.624 00:40:56.624 INFO: Requests: 00:40:56.624 { 00:40:56.624 "jsonrpc": "2.0", 00:40:56.624 "method": "framework_start_init", 00:40:56.624 "id": 1 00:40:56.624 } 00:40:56.624 00:40:56.624 [2024-10-01 01:58:36.429494] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:56.624 INFO: response: 00:40:56.624 { 00:40:56.624 "jsonrpc": "2.0", 00:40:56.624 "id": 1, 00:40:56.624 "result": true 00:40:56.624 } 00:40:56.624 00:40:56.624 INFO: response: 00:40:56.624 { 00:40:56.624 "jsonrpc": "2.0", 00:40:56.624 "id": 1, 00:40:56.624 "result": true 00:40:56.624 } 00:40:56.624 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.624 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.624 01:58:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:56.624 INFO: Setting log level to 40 00:40:56.624 INFO: Setting log level to 40 00:40:56.624 INFO: Setting log level to 40 00:40:56.624 [2024-10-01 01:58:36.439638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.624 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.624 01:58:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.624 01:58:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 Nvme0n1 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 [2024-10-01 01:58:39.340146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 [ 00:40:59.903 { 00:40:59.903 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:59.903 "subtype": "Discovery", 00:40:59.903 "listen_addresses": [], 00:40:59.903 "allow_any_host": true, 00:40:59.903 "hosts": [] 00:40:59.903 }, 00:40:59.903 { 00:40:59.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:59.903 "subtype": "NVMe", 00:40:59.903 "listen_addresses": [ 00:40:59.903 { 00:40:59.903 "trtype": "TCP", 00:40:59.903 "adrfam": "IPv4", 00:40:59.903 "traddr": "10.0.0.2", 00:40:59.903 "trsvcid": "4420" 00:40:59.903 } 00:40:59.903 ], 00:40:59.903 "allow_any_host": true, 00:40:59.903 "hosts": [], 00:40:59.903 "serial_number": 
"SPDK00000000000001", 00:40:59.903 "model_number": "SPDK bdev Controller", 00:40:59.903 "max_namespaces": 1, 00:40:59.903 "min_cntlid": 1, 00:40:59.903 "max_cntlid": 65519, 00:40:59.903 "namespaces": [ 00:40:59.903 { 00:40:59.903 "nsid": 1, 00:40:59.903 "bdev_name": "Nvme0n1", 00:40:59.903 "name": "Nvme0n1", 00:40:59.903 "nguid": "D5BA14B47DB6447D8EA7F80B2AFFB463", 00:40:59.903 "uuid": "d5ba14b4-7db6-447d-8ea7-f80b2affb463" 00:40:59.903 } 00:40:59.903 ] 00:40:59.903 } 00:40:59.903 ] 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.903 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:59.903 01:58:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:59.903 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:59.903 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.162 rmmod nvme_tcp 00:41:00.162 rmmod nvme_fabrics 00:41:00.162 rmmod nvme_keyring 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@513 -- # 
'[' -n 1122023 ']' 00:41:00.162 01:58:39 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 1122023 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1122023 ']' 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1122023 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1122023 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1122023' 00:41:00.162 killing process with pid 1122023 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1122023 00:41:00.162 01:58:39 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1122023 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.062 01:58:41 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.062 01:58:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.062 01:58:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.969 01:58:43 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.969 00:41:03.969 real 0m18.357s 00:41:03.969 user 0m27.095s 00:41:03.969 sys 0m2.496s 00:41:03.969 01:58:43 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:03.969 01:58:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:03.969 ************************************ 00:41:03.969 END TEST nvmf_identify_passthru 00:41:03.969 ************************************ 00:41:03.969 01:58:43 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:03.969 01:58:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:03.969 01:58:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:03.969 01:58:43 -- common/autotest_common.sh@10 -- # set +x 00:41:03.969 ************************************ 00:41:03.969 START TEST nvmf_dif 00:41:03.969 ************************************ 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:03.969 * Looking for test 
storage... 00:41:03.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.969 01:58:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.969 --rc genhtml_branch_coverage=1 00:41:03.969 --rc genhtml_function_coverage=1 00:41:03.969 --rc genhtml_legend=1 00:41:03.969 --rc geninfo_all_blocks=1 00:41:03.969 --rc geninfo_unexecuted_blocks=1 00:41:03.969 00:41:03.969 ' 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.969 --rc genhtml_branch_coverage=1 00:41:03.969 --rc genhtml_function_coverage=1 00:41:03.969 --rc genhtml_legend=1 00:41:03.969 --rc geninfo_all_blocks=1 00:41:03.969 --rc geninfo_unexecuted_blocks=1 00:41:03.969 00:41:03.969 ' 00:41:03.969 01:58:43 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.969 --rc genhtml_branch_coverage=1 00:41:03.969 --rc genhtml_function_coverage=1 00:41:03.969 --rc genhtml_legend=1 00:41:03.969 --rc geninfo_all_blocks=1 00:41:03.969 --rc geninfo_unexecuted_blocks=1 00:41:03.969 00:41:03.969 ' 00:41:03.969 01:58:43 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.969 --rc genhtml_branch_coverage=1 00:41:03.969 --rc genhtml_function_coverage=1 00:41:03.969 --rc genhtml_legend=1 00:41:03.969 --rc geninfo_all_blocks=1 00:41:03.969 --rc geninfo_unexecuted_blocks=1 00:41:03.969 00:41:03.969 ' 00:41:03.969 01:58:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:03.969 01:58:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:03.970 01:58:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:03.970 01:58:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:03.970 01:58:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:03.970 01:58:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:03.970 01:58:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.970 01:58:43 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.970 01:58:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.970 01:58:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:03.970 01:58:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:03.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:03.970 01:58:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:03.970 01:58:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:03.970 01:58:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:03.970 01:58:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:03.970 01:58:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.970 01:58:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:03.970 01:58:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:03.970 01:58:43 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:03.970 01:58:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:06.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:06.503 01:58:45 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:06.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.503 01:58:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:06.504 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:06.504 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.504 
01:58:45 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:41:06.504 00:41:06.504 --- 10.0.0.2 ping statistics --- 00:41:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.504 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
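For reference, the nvmf_tcp_init steps traced above split the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a dedicated cvl_0_0_ns_spdk namespace (target side, cvl_0_0, 10.0.0.2) so both ends of the NVMe/TCP connection can run on one host. A condensed sketch of the same plumbing, assuming the interface names and addresses used in this run:

  # Sketch of the namespace setup performed by nvmf_tcp_init above; commands mirror the trace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns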
00:41:06.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:41:06.504 00:41:06.504 --- 10.0.0.1 ping statistics --- 00:41:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.504 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:41:06.504 01:58:45 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:07.438 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:07.438 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:07.438 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:07.438 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:07.438 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:07.438 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:07.438 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:07.438 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:07.438 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:07.438 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:07.438 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:07.438 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:07.438 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:07.438 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:07.438 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:07.438 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:07.438 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:07.438 01:58:47 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:07.439 01:58:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:07.439 01:58:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:07.439 01:58:47 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.439 01:58:47 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=1125290 00:41:07.439 01:58:47 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:07.439 01:58:47 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 1125290 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1125290 ']' 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:07.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:07.439 01:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.439 [2024-10-01 01:58:47.231844] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:41:07.439 [2024-10-01 01:58:47.231947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.697 [2024-10-01 01:58:47.310484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.697 [2024-10-01 01:58:47.409422] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.697 [2024-10-01 01:58:47.409499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.697 [2024-10-01 01:58:47.409517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.697 [2024-10-01 01:58:47.409531] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.697 [2024-10-01 01:58:47.409544] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.697 [2024-10-01 01:58:47.409577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.697 01:58:47 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:07.697 01:58:47 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:41:07.697 01:58:47 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:07.697 01:58:47 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:07.697 01:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 01:58:47 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.955 01:58:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:07.955 01:58:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 [2024-10-01 01:58:47.558393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.955 01:58:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 ************************************ 00:41:07.955 START TEST fio_dif_1_default 00:41:07.955 ************************************ 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 bdev_null0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.955 [2024-10-01 01:58:47.618721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:07.955 { 00:41:07.955 "params": { 00:41:07.955 "name": "Nvme$subsystem", 00:41:07.955 "trtype": "$TEST_TRANSPORT", 00:41:07.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.955 "adrfam": "ipv4", 00:41:07.955 "trsvcid": "$NVMF_PORT", 00:41:07.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.955 "hdgst": ${hdgst:-false}, 00:41:07.955 "ddgst": ${ddgst:-false} 00:41:07.955 }, 00:41:07.955 "method": "bdev_nvme_attach_controller" 00:41:07.955 } 00:41:07.955 EOF 00:41:07.955 )") 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
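For reference, the rpc_cmd calls traced above stand up the DIF side of this test: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported under nqn.2016-06.io.spdk:cnode0 with a TCP listener on 10.0.0.2:4420, on a transport created with --dif-insert-or-strip. A standalone sketch of the same sequence, assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock socket of the nvmf_tgt started earlier (the harness drives this through its rpc_cmd wrapper instead):

  # Sketch of the target-side RPC sequence for fio_dif_1_default; flags mirror the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420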
00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:07.955 "params": { 00:41:07.955 "name": "Nvme0", 00:41:07.955 "trtype": "tcp", 00:41:07.955 "traddr": "10.0.0.2", 00:41:07.955 "adrfam": "ipv4", 00:41:07.955 "trsvcid": "4420", 00:41:07.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:07.955 "hdgst": false, 00:41:07.955 "ddgst": false 00:41:07.955 }, 00:41:07.955 "method": "bdev_nvme_attach_controller" 00:41:07.955 }' 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:07.955 01:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.213 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.214 fio-3.35 00:41:08.214 Starting 1 thread 00:41:20.410 00:41:20.410 filename0: (groupid=0, jobs=1): err= 0: pid=1125518: Tue Oct 1 01:58:58 2024 00:41:20.410 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:41:20.410 slat (nsec): min=4635, max=82945, avg=9825.46, stdev=4045.40 00:41:20.410 clat (usec): min=621, max=48456, avg=21027.54, stdev=20288.78 00:41:20.410 lat (usec): min=629, max=48471, avg=21037.37, stdev=20288.72 00:41:20.410 clat percentiles (usec): 00:41:20.410 | 1.00th=[ 668], 5.00th=[ 685], 10.00th=[ 685], 20.00th=[ 701], 00:41:20.410 | 30.00th=[ 709], 40.00th=[ 717], 50.00th=[41157], 60.00th=[41157], 00:41:20.410 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:20.410 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:41:20.410 | 99.99th=[48497] 00:41:20.410 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:41:20.410 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:41:20.410 lat (usec) : 750=47.42%, 1000=2.47% 00:41:20.410 lat (msec) : 50=50.11% 00:41:20.410 cpu : usr=91.21%, sys=8.44%, ctx=35, majf=0, minf=405 00:41:20.410 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.410 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.410 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:20.410 
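As a quick consistency check on the numbers above, using only values from the report: 1900 completed reads of 4 KiB each add up to 7600 KiB, and 7600 KiB over 10.003 s is about 760 KiB/s, i.e. roughly 190 IOPS, which matches the reported IOPS=189 and BW=760KiB/s. The ~21 ms average completion latency likewise follows from about half of the reads landing in the ~41 ms buckets while the other half complete in well under a millisecond.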
00:41:20.410 Run status group 0 (all jobs): 00:41:20.411 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10003-10003msec 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 00:41:20.411 real 0m11.184s 00:41:20.411 user 0m10.384s 00:41:20.411 sys 0m1.123s 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 ************************************ 00:41:20.411 END TEST fio_dif_1_default 00:41:20.411 ************************************ 00:41:20.411 01:58:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:20.411 01:58:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:20.411 01:58:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 ************************************ 00:41:20.411 START TEST fio_dif_1_multi_subsystems 00:41:20.411 ************************************ 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 bdev_null0 00:41:20.411 01:58:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 [2024-10-01 01:58:58.856152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 bdev_null1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:20.411 { 00:41:20.411 "params": { 00:41:20.411 "name": "Nvme$subsystem", 00:41:20.411 "trtype": "$TEST_TRANSPORT", 00:41:20.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.411 "adrfam": "ipv4", 00:41:20.411 "trsvcid": "$NVMF_PORT", 00:41:20.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.411 "hdgst": ${hdgst:-false}, 00:41:20.411 "ddgst": ${ddgst:-false} 00:41:20.411 }, 00:41:20.411 "method": "bdev_nvme_attach_controller" 00:41:20.411 } 00:41:20.411 EOF 00:41:20.411 )") 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.411 
01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:20.411 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:20.411 { 00:41:20.411 "params": { 00:41:20.411 "name": "Nvme$subsystem", 00:41:20.411 "trtype": "$TEST_TRANSPORT", 00:41:20.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.411 "adrfam": "ipv4", 00:41:20.411 "trsvcid": "$NVMF_PORT", 00:41:20.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.412 "hdgst": ${hdgst:-false}, 00:41:20.412 "ddgst": ${ddgst:-false} 00:41:20.412 }, 00:41:20.412 "method": "bdev_nvme_attach_controller" 00:41:20.412 } 00:41:20.412 EOF 00:41:20.412 )") 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:20.412 "params": { 00:41:20.412 "name": "Nvme0", 00:41:20.412 "trtype": "tcp", 00:41:20.412 "traddr": "10.0.0.2", 00:41:20.412 "adrfam": "ipv4", 00:41:20.412 "trsvcid": "4420", 00:41:20.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.412 "hdgst": false, 00:41:20.412 "ddgst": false 00:41:20.412 }, 00:41:20.412 "method": "bdev_nvme_attach_controller" 00:41:20.412 },{ 00:41:20.412 "params": { 00:41:20.412 "name": "Nvme1", 00:41:20.412 "trtype": "tcp", 00:41:20.412 "traddr": "10.0.0.2", 00:41:20.412 "adrfam": "ipv4", 00:41:20.412 "trsvcid": "4420", 00:41:20.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.412 "hdgst": false, 00:41:20.412 "ddgst": false 00:41:20.412 }, 00:41:20.412 "method": "bdev_nvme_attach_controller" 00:41:20.412 }' 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.412 01:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.412 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.412 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.412 fio-3.35 00:41:20.412 Starting 2 threads 00:41:30.383 00:41:30.383 filename0: (groupid=0, jobs=1): err= 0: pid=1126917: Tue Oct 1 01:59:10 2024 00:41:30.383 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10005msec) 00:41:30.383 slat (nsec): min=7067, max=54278, avg=9651.91, stdev=3737.66 00:41:30.383 clat (usec): min=40802, max=42792, avg=41311.16, stdev=495.23 00:41:30.383 lat (usec): min=40809, max=42846, avg=41320.81, stdev=495.77 00:41:30.383 clat percentiles (usec): 00:41:30.383 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:30.383 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.383 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:30.383 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:41:30.383 | 99.99th=[42730] 00:41:30.383 bw ( KiB/s): min= 384, max= 416, per=39.99%, avg=385.60, stdev= 7.16, samples=20 00:41:30.383 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:41:30.383 lat (msec) : 50=100.00% 00:41:30.383 cpu : usr=94.91%, sys=4.80%, ctx=16, majf=0, minf=144 00:41:30.383 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.383 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.383 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.383 filename1: (groupid=0, jobs=1): err= 0: pid=1126918: Tue Oct 1 01:59:10 2024 00:41:30.383 read: IOPS=143, BW=576KiB/s (590kB/s)(5760KiB/10004msec) 00:41:30.383 slat (nsec): min=7163, max=78085, avg=9914.62, stdev=4270.87 00:41:30.383 clat (usec): min=659, max=42570, avg=27757.58, stdev=18987.49 00:41:30.383 lat (usec): min=666, max=42601, avg=27767.50, stdev=18987.47 00:41:30.383 clat percentiles (usec): 00:41:30.383 | 1.00th=[ 676], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 725], 00:41:30.383 | 30.00th=[ 898], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:30.383 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:30.383 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:41:30.383 | 99.99th=[42730] 00:41:30.383 bw ( KiB/s): min= 384, max= 768, per=59.62%, avg=574.40, stdev=183.53, samples=20 00:41:30.383 iops : min= 96, max= 192, avg=143.60, stdev=45.88, samples=20 00:41:30.383 lat (usec) : 750=25.69%, 1000=6.81% 00:41:30.383 lat (msec) : 2=0.56%, 50=66.94% 00:41:30.383 cpu : usr=95.00%, sys=4.70%, ctx=17, majf=0, minf=178 00:41:30.383 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.383 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.383 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.383 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.383 00:41:30.383 Run status group 0 (all jobs): 00:41:30.383 READ: bw=963KiB/s (986kB/s), 387KiB/s-576KiB/s (396kB/s-590kB/s), io=9632KiB (9863kB), run=10004-10005msec 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 00:41:30.642 real 0m11.456s 00:41:30.642 user 0m20.389s 00:41:30.642 sys 0m1.276s 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 ************************************ 00:41:30.642 END TEST fio_dif_1_multi_subsystems 00:41:30.642 ************************************ 00:41:30.642 01:59:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:41:30.642 01:59:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:30.642 01:59:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 ************************************ 00:41:30.642 START TEST fio_dif_rand_params 00:41:30.642 ************************************ 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 bdev_null0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.642 [2024-10-01 01:59:10.357995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.642 01:59:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:30.642 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:30.642 { 00:41:30.642 "params": { 00:41:30.642 "name": "Nvme$subsystem", 00:41:30.642 "trtype": "$TEST_TRANSPORT", 00:41:30.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.642 "adrfam": "ipv4", 00:41:30.642 "trsvcid": "$NVMF_PORT", 00:41:30.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.643 "hdgst": ${hdgst:-false}, 00:41:30.643 "ddgst": ${ddgst:-false} 00:41:30.643 }, 00:41:30.643 "method": "bdev_nvme_attach_controller" 00:41:30.643 } 00:41:30.643 EOF 00:41:30.643 )") 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
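For reference, fio never opens a kernel NVMe device in these tests: the spdk_bdev ioengine plugin is LD_PRELOADed into fio and attaches to the TCP target itself using the bdev_nvme_attach_controller config printed just below, while the generated job file is passed on /dev/fd/61. A minimal standalone sketch of the same invocation, assuming the JSON is saved to bdev.json and a job file dif.fio exists (both file names are placeholders; the harness feeds both over /dev/fd):

  # Sketch: drive the NVMe/TCP target through fio's SPDK bdev plugin.
  # bdev.json would hold the bdev_nvme_attach_controller config printed below;
  # dif.fio is a placeholder job file (the harness generates its jobs on the fly).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio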
00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:30.643 "params": { 00:41:30.643 "name": "Nvme0", 00:41:30.643 "trtype": "tcp", 00:41:30.643 "traddr": "10.0.0.2", 00:41:30.643 "adrfam": "ipv4", 00:41:30.643 "trsvcid": "4420", 00:41:30.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:30.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:30.643 "hdgst": false, 00:41:30.643 "ddgst": false 00:41:30.643 }, 00:41:30.643 "method": "bdev_nvme_attach_controller" 00:41:30.643 }' 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:30.643 01:59:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.903 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:30.903 ... 
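The job file fed to fio on /dev/fd/61 is not echoed in the log; a hypothetical job consistent with the parameters traced in target/dif.sh for this case (bs=128k, numjobs=3, iodepth=3, runtime=5) and with the banner above could look like the sketch below. The Nvme0n1 filename is an assumption based on SPDK's usual namespace naming for the Nvme0 controller, and thread=1 is the mode the SPDK fio plugins run in:

  # Hypothetical fio job matching the traced 128k / 3-job / iodepth=3 randread parameters.
  cat > dif.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1
  EOF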
00:41:30.903 fio-3.35 00:41:30.903 Starting 3 threads 00:41:37.516 00:41:37.516 filename0: (groupid=0, jobs=1): err= 0: pid=1128312: Tue Oct 1 01:59:16 2024 00:41:37.516 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(148MiB/5044msec) 00:41:37.516 slat (nsec): min=4728, max=38597, avg=16168.99, stdev=3779.27 00:41:37.516 clat (usec): min=4494, max=57811, avg=12792.47, stdev=8775.64 00:41:37.516 lat (usec): min=4508, max=57823, avg=12808.64, stdev=8775.48 00:41:37.516 clat percentiles (usec): 00:41:37.516 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 7242], 20.00th=[ 8586], 00:41:37.516 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11338], 60.00th=[12125], 00:41:37.516 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15139], 95.00th=[17171], 00:41:37.516 | 99.00th=[53216], 99.50th=[55313], 99.90th=[57934], 99.95th=[57934], 00:41:37.516 | 99.99th=[57934] 00:41:37.516 bw ( KiB/s): min=24832, max=40960, per=34.93%, avg=30156.80, stdev=5353.33, samples=10 00:41:37.516 iops : min= 194, max= 320, avg=235.60, stdev=41.82, samples=10 00:41:37.516 lat (msec) : 10=33.87%, 20=61.56%, 50=2.29%, 100=2.29% 00:41:37.516 cpu : usr=94.80%, sys=4.74%, ctx=8, majf=0, minf=153 00:41:37.516 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.516 issued rwts: total=1181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.516 filename0: (groupid=0, jobs=1): err= 0: pid=1128313: Tue Oct 1 01:59:16 2024 00:41:37.516 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(137MiB/5005msec) 00:41:37.516 slat (usec): min=4, max=112, avg=17.03, stdev= 5.43 00:41:37.516 clat (usec): min=4880, max=89097, avg=13676.24, stdev=9800.25 00:41:37.516 lat (usec): min=4893, max=89115, avg=13693.27, stdev=9799.80 00:41:37.516 clat percentiles (usec): 00:41:37.516 | 1.00th=[ 5145], 5.00th=[ 6390], 10.00th=[ 8094], 20.00th=[ 8979], 00:41:37.516 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:41:37.516 | 70.00th=[13173], 80.00th=[14484], 90.00th=[15795], 95.00th=[47973], 00:41:37.516 | 99.00th=[52691], 99.50th=[54264], 99.90th=[88605], 99.95th=[88605], 00:41:37.516 | 99.99th=[88605] 00:41:37.516 bw ( KiB/s): min=17920, max=35328, per=32.42%, avg=27986.30, stdev=4740.43, samples=10 00:41:37.516 iops : min= 140, max= 276, avg=218.60, stdev=37.04, samples=10 00:41:37.517 lat (msec) : 10=28.83%, 20=65.60%, 50=2.37%, 100=3.19% 00:41:37.517 cpu : usr=93.96%, sys=5.60%, ctx=13, majf=0, minf=86 00:41:37.517 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.517 issued rwts: total=1096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.517 filename0: (groupid=0, jobs=1): err= 0: pid=1128314: Tue Oct 1 01:59:16 2024 00:41:37.517 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(141MiB/5044msec) 00:41:37.517 slat (nsec): min=4688, max=56924, avg=17503.28, stdev=4824.54 00:41:37.517 clat (usec): min=4953, max=90236, avg=13391.92, stdev=10339.61 00:41:37.517 lat (usec): min=4966, max=90250, avg=13409.42, stdev=10339.49 00:41:37.517 clat percentiles (usec): 00:41:37.517 | 1.00th=[ 5342], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8586], 
00:41:37.517 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11338], 60.00th=[11731], 00:41:37.517 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14615], 95.00th=[48497], 00:41:37.517 | 99.00th=[53740], 99.50th=[54789], 99.90th=[60556], 99.95th=[90702], 00:41:37.517 | 99.99th=[90702] 00:41:37.517 bw ( KiB/s): min=23296, max=34048, per=33.27%, avg=28723.20, stdev=3511.75, samples=10 00:41:37.517 iops : min= 182, max= 266, avg=224.40, stdev=27.44, samples=10 00:41:37.517 lat (msec) : 10=30.31%, 20=63.20%, 50=2.76%, 100=3.73% 00:41:37.517 cpu : usr=91.10%, sys=7.24%, ctx=391, majf=0, minf=75 00:41:37.517 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.517 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.517 00:41:37.517 Run status group 0 (all jobs): 00:41:37.517 READ: bw=84.3MiB/s (88.4MB/s), 27.4MiB/s-29.3MiB/s (28.7MB/s-30.7MB/s), io=425MiB (446MB), run=5005-5044msec 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 bdev_null0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 [2024-10-01 01:59:16.569386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 bdev_null1 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 bdev_null2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:37.517 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 
-- # config+=("$(cat <<-EOF 00:41:37.518 { 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme$subsystem", 00:41:37.518 "trtype": "$TEST_TRANSPORT", 00:41:37.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "$NVMF_PORT", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.518 "hdgst": ${hdgst:-false}, 00:41:37.518 "ddgst": ${ddgst:-false} 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 } 00:41:37.518 EOF 00:41:37.518 )") 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:37.518 { 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme$subsystem", 00:41:37.518 "trtype": "$TEST_TRANSPORT", 00:41:37.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "$NVMF_PORT", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.518 "hdgst": ${hdgst:-false}, 00:41:37.518 "ddgst": ${ddgst:-false} 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 } 00:41:37.518 EOF 00:41:37.518 )") 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.518 01:59:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:37.518 { 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme$subsystem", 00:41:37.518 "trtype": "$TEST_TRANSPORT", 00:41:37.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "$NVMF_PORT", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.518 "hdgst": ${hdgst:-false}, 00:41:37.518 "ddgst": ${ddgst:-false} 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 } 00:41:37.518 EOF 00:41:37.518 )") 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme0", 00:41:37.518 "trtype": "tcp", 00:41:37.518 "traddr": "10.0.0.2", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "4420", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:37.518 "hdgst": false, 00:41:37.518 "ddgst": false 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 },{ 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme1", 00:41:37.518 "trtype": "tcp", 00:41:37.518 "traddr": "10.0.0.2", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "4420", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:37.518 "hdgst": false, 00:41:37.518 "ddgst": false 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 },{ 00:41:37.518 "params": { 00:41:37.518 "name": "Nvme2", 00:41:37.518 "trtype": "tcp", 00:41:37.518 "traddr": "10.0.0.2", 00:41:37.518 "adrfam": "ipv4", 00:41:37.518 "trsvcid": "4420", 00:41:37.518 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:37.518 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:37.518 "hdgst": false, 00:41:37.518 "ddgst": false 00:41:37.518 }, 00:41:37.518 "method": "bdev_nvme_attach_controller" 00:41:37.518 }' 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:37.518 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:37.518 
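For reference, the per-subsystem target setup traced above (the NULL_DIF=2 case) maps onto plain SPDK RPC calls. The following is a minimal bash sketch rather than an excerpt of dif.sh: the loop, the SPDK path variable, and the use of scripts/rpc.py against the default RPC socket are assumptions, while the RPC names and arguments are exactly the ones shown in the xtrace output.

#!/usr/bin/env bash
# Sketch only: recreate the three DIF-type-2 null bdevs exposed over NVMe/TCP, as in the trace above.
# Assumes a running SPDK nvmf target and the default rpc.py Unix socket; the path below is the one used by this job.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for sub in 0 1 2; do
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    "$SPDK/scripts/rpc.py" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done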
01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:37.519 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:37.519 01:59:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.519 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.519 ... 00:41:37.519 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.519 ... 00:41:37.519 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.519 ... 00:41:37.519 fio-3.35 00:41:37.519 Starting 24 threads 00:41:49.725 00:41:49.725 filename0: (groupid=0, jobs=1): err= 0: pid=1129172: Tue Oct 1 01:59:27 2024 00:41:49.725 read: IOPS=244, BW=979KiB/s (1002kB/s)(9856KiB/10069msec) 00:41:49.725 slat (usec): min=3, max=115, avg=44.85, stdev=25.80 00:41:49.725 clat (msec): min=19, max=350, avg=64.94, stdev=78.53 00:41:49.725 lat (msec): min=19, max=350, avg=64.99, stdev=78.52 00:41:49.725 clat percentiles (msec): 00:41:49.725 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.725 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.725 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.725 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 351], 00:41:49.725 | 99.99th=[ 351] 00:41:49.725 bw ( KiB/s): min= 144, max= 1920, per=4.28%, avg=979.20, stdev=834.89, samples=20 00:41:49.725 iops : min= 36, max= 480, avg=244.80, stdev=208.72, samples=20 00:41:49.725 lat (msec) : 20=0.65%, 50=85.06%, 100=0.65%, 250=0.81%, 500=12.82% 00:41:49.725 cpu : usr=98.39%, sys=1.18%, ctx=37, majf=0, minf=29 00:41:49.725 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:49.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.725 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.725 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.725 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.725 filename0: (groupid=0, jobs=1): err= 0: pid=1129173: Tue Oct 1 01:59:27 2024 00:41:49.725 read: IOPS=229, BW=919KiB/s (941kB/s)(9216KiB/10026msec) 00:41:49.725 slat (usec): min=9, max=111, avg=47.70, stdev=23.56 00:41:49.725 clat (msec): min=21, max=460, avg=69.20, stdev=106.79 00:41:49.725 lat (msec): min=21, max=460, avg=69.24, stdev=106.78 00:41:49.725 clat percentiles (msec): 00:41:49.725 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.725 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.725 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 77], 95.00th=[ 388], 00:41:49.725 | 99.00th=[ 430], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:41:49.725 | 99.99th=[ 460] 00:41:49.725 bw ( KiB/s): min= 128, max= 1920, per=4.00%, avg=915.20, stdev=858.42, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=228.80, stdev=214.61, samples=20 00:41:49.726 lat (msec) : 50=88.80%, 100=1.48%, 500=9.72% 00:41:49.726 cpu : usr=98.02%, sys=1.32%, ctx=36, majf=0, minf=40 00:41:49.726 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129174: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=241, BW=967KiB/s (991kB/s)(9728KiB/10055msec) 00:41:49.726 slat (usec): min=8, max=123, avg=36.56, stdev=20.06 00:41:49.726 clat (msec): min=19, max=407, avg=65.78, stdev=82.74 00:41:49.726 lat (msec): min=19, max=407, avg=65.81, stdev=82.74 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 268], 00:41:49.726 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:41:49.726 | 99.99th=[ 409] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.22%, avg=966.40, stdev=835.09, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=241.60, stdev=208.77, samples=20 00:41:49.726 lat (msec) : 20=0.08%, 50=85.94%, 100=0.82%, 250=0.08%, 500=13.08% 00:41:49.726 cpu : usr=97.93%, sys=1.39%, ctx=91, majf=0, minf=28 00:41:49.726 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129175: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=241, BW=967KiB/s (990kB/s)(9712KiB/10047msec) 00:41:49.726 slat (usec): min=8, max=120, avg=39.74, stdev=28.25 00:41:49.726 clat (msec): min=31, max=391, avg=65.73, stdev=81.03 00:41:49.726 lat (msec): min=31, max=391, avg=65.77, stdev=81.02 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 257], 95.00th=[ 264], 00:41:49.726 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393], 00:41:49.726 | 99.99th=[ 393] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.21%, avg=964.80, stdev=836.52, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=241.20, stdev=209.13, samples=20 00:41:49.726 lat (msec) : 50=86.33%, 250=2.06%, 500=11.61% 00:41:49.726 cpu : usr=98.02%, sys=1.56%, ctx=33, majf=0, minf=29 00:41:49.726 IO depths : 1=5.5%, 2=11.0%, 4=22.8%, 8=53.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129176: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=241, BW=964KiB/s (987kB/s)(9728KiB/10090msec) 00:41:49.726 slat (usec): min=8, max=114, avg=38.20, stdev=23.25 00:41:49.726 clat (msec): min=31, max=378, avg=65.80, stdev=80.31 00:41:49.726 lat (msec): min=31, max=378, avg=65.83, stdev=80.30 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 
33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 257], 95.00th=[ 264], 00:41:49.726 | 99.00th=[ 288], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 380], 00:41:49.726 | 99.99th=[ 380] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.22%, avg=966.40, stdev=835.88, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=241.60, stdev=208.97, samples=20 00:41:49.726 lat (msec) : 50=86.18%, 250=1.32%, 500=12.50% 00:41:49.726 cpu : usr=98.34%, sys=1.24%, ctx=47, majf=0, minf=23 00:41:49.726 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129177: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=231, BW=925KiB/s (948kB/s)(9280KiB/10028msec) 00:41:49.726 slat (usec): min=8, max=130, avg=40.78, stdev=18.18 00:41:49.726 clat (msec): min=31, max=529, avg=68.79, stdev=104.20 00:41:49.726 lat (msec): min=31, max=529, avg=68.83, stdev=104.19 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 262], 95.00th=[ 388], 00:41:49.726 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 523], 99.95th=[ 531], 00:41:49.726 | 99.99th=[ 531] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=921.60, stdev=865.75, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=230.40, stdev=216.44, samples=20 00:41:49.726 lat (msec) : 50=88.97%, 100=0.69%, 250=0.09%, 500=9.83%, 750=0.43% 00:41:49.726 cpu : usr=96.57%, sys=1.86%, ctx=121, majf=0, minf=31 00:41:49.726 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129178: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=233, BW=932KiB/s (954kB/s)(9352KiB/10033msec) 00:41:49.726 slat (usec): min=8, max=115, avg=43.28, stdev=27.53 00:41:49.726 clat (msec): min=20, max=530, avg=68.31, stdev=98.86 00:41:49.726 lat (msec): min=20, max=530, avg=68.36, stdev=98.85 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 266], 95.00th=[ 359], 00:41:49.726 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 456], 99.95th=[ 531], 00:41:49.726 | 99.99th=[ 531] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=928.95, stdev=852.22, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=232.20, stdev=213.03, samples=20 00:41:49.726 lat (msec) : 50=87.13%, 100=1.67%, 250=0.60%, 500=10.52%, 750=0.09% 00:41:49.726 cpu : usr=98.32%, sys=1.28%, ctx=16, majf=0, minf=22 00:41:49.726 IO depths : 1=3.2%, 2=7.4%, 
4=17.5%, 8=61.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=92.5%, 8=3.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename0: (groupid=0, jobs=1): err= 0: pid=1129179: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=243, BW=972KiB/s (996kB/s)(9776KiB/10055msec) 00:41:49.726 slat (nsec): min=8242, max=99703, avg=28356.55, stdev=14855.70 00:41:49.726 clat (msec): min=21, max=371, avg=65.39, stdev=79.44 00:41:49.726 lat (msec): min=21, max=371, avg=65.42, stdev=79.43 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.726 | 99.00th=[ 288], 99.50th=[ 347], 99.90th=[ 372], 99.95th=[ 372], 00:41:49.726 | 99.99th=[ 372] 00:41:49.726 bw ( KiB/s): min= 176, max= 1920, per=4.24%, avg=971.20, stdev=830.19, samples=20 00:41:49.726 iops : min= 44, max= 480, avg=242.80, stdev=207.55, samples=20 00:41:49.726 lat (msec) : 50=85.60%, 100=0.82%, 250=1.47%, 500=12.11% 00:41:49.726 cpu : usr=97.42%, sys=1.61%, ctx=95, majf=0, minf=35 00:41:49.726 IO depths : 1=5.4%, 2=11.1%, 4=23.4%, 8=53.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename1: (groupid=0, jobs=1): err= 0: pid=1129180: Tue Oct 1 01:59:27 2024 00:41:49.726 read: IOPS=240, BW=964KiB/s (987kB/s)(9688KiB/10052msec) 00:41:49.726 slat (usec): min=6, max=136, avg=54.27, stdev=31.37 00:41:49.726 clat (msec): min=22, max=508, avg=65.86, stdev=83.62 00:41:49.726 lat (msec): min=22, max=508, avg=65.92, stdev=83.61 00:41:49.726 clat percentiles (msec): 00:41:49.726 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:49.726 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.726 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 257], 95.00th=[ 264], 00:41:49.726 | 99.00th=[ 368], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 510], 00:41:49.726 | 99.99th=[ 510] 00:41:49.726 bw ( KiB/s): min= 128, max= 1920, per=4.21%, avg=962.40, stdev=838.76, samples=20 00:41:49.726 iops : min= 32, max= 480, avg=240.60, stdev=209.69, samples=20 00:41:49.726 lat (msec) : 50=86.54%, 250=1.82%, 500=11.56%, 750=0.08% 00:41:49.726 cpu : usr=97.95%, sys=1.33%, ctx=118, majf=0, minf=27 00:41:49.726 IO depths : 1=5.5%, 2=11.0%, 4=23.0%, 8=53.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:49.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.726 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.726 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.726 filename1: (groupid=0, jobs=1): err= 0: pid=1129181: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=231, BW=925KiB/s (947kB/s)(9280KiB/10032msec) 00:41:49.727 slat (usec): min=8, max=102, avg=35.69, stdev=14.36 00:41:49.727 clat (msec): min=25, max=529, avg=68.88, stdev=104.08 
00:41:49.727 lat (msec): min=25, max=529, avg=68.91, stdev=104.09 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 264], 95.00th=[ 384], 00:41:49.727 | 99.00th=[ 422], 99.50th=[ 426], 99.90th=[ 523], 99.95th=[ 531], 00:41:49.727 | 99.99th=[ 531] 00:41:49.727 bw ( KiB/s): min= 112, max= 1920, per=4.03%, avg=921.75, stdev=865.77, samples=20 00:41:49.727 iops : min= 28, max= 480, avg=230.40, stdev=216.41, samples=20 00:41:49.727 lat (msec) : 50=88.97%, 100=0.69%, 250=0.09%, 500=10.00%, 750=0.26% 00:41:49.727 cpu : usr=97.04%, sys=1.83%, ctx=89, majf=0, minf=33 00:41:49.727 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129182: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.86MiB/10074msec) 00:41:49.727 slat (usec): min=5, max=116, avg=37.88, stdev=27.84 00:41:49.727 clat (msec): min=5, max=369, avg=63.53, stdev=77.70 00:41:49.727 lat (msec): min=5, max=369, avg=63.57, stdev=77.69 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 257], 95.00th=[ 264], 00:41:49.727 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 368], 00:41:49.727 | 99.99th=[ 368] 00:41:49.727 bw ( KiB/s): min= 240, max= 2152, per=4.38%, avg=1002.80, stdev=851.40, samples=20 00:41:49.727 iops : min= 60, max= 538, avg=250.70, stdev=212.85, samples=20 00:41:49.727 lat (msec) : 10=0.71%, 20=1.78%, 50=83.63%, 250=1.98%, 500=11.89% 00:41:49.727 cpu : usr=98.06%, sys=1.53%, ctx=23, majf=0, minf=30 00:41:49.727 IO depths : 1=5.2%, 2=11.2%, 4=24.0%, 8=52.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129183: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=231, BW=925KiB/s (947kB/s)(9280KiB/10032msec) 00:41:49.727 slat (nsec): min=8572, max=98582, avg=37771.56, stdev=14607.83 00:41:49.727 clat (msec): min=32, max=544, avg=68.84, stdev=106.13 00:41:49.727 lat (msec): min=32, max=544, avg=68.88, stdev=106.12 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 101], 95.00th=[ 393], 00:41:49.727 | 99.00th=[ 426], 99.50th=[ 510], 99.90th=[ 527], 99.95th=[ 542], 00:41:49.727 | 99.99th=[ 542] 00:41:49.727 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=921.60, stdev=865.53, samples=20 00:41:49.727 iops : min= 32, max= 480, avg=230.40, stdev=216.38, samples=20 00:41:49.727 lat (msec) : 
50=88.97%, 100=0.69%, 250=0.69%, 500=9.14%, 750=0.52% 00:41:49.727 cpu : usr=98.20%, sys=1.22%, ctx=57, majf=0, minf=24 00:41:49.727 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129184: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=243, BW=974KiB/s (997kB/s)(9792KiB/10055msec) 00:41:49.727 slat (nsec): min=6114, max=96612, avg=29562.87, stdev=15148.58 00:41:49.727 clat (msec): min=19, max=363, avg=65.41, stdev=78.68 00:41:49.727 lat (msec): min=19, max=363, avg=65.44, stdev=78.67 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.727 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 363], 00:41:49.727 | 99.99th=[ 363] 00:41:49.727 bw ( KiB/s): min= 144, max= 1920, per=4.25%, avg=972.80, stdev=828.80, samples=20 00:41:49.727 iops : min= 36, max= 480, avg=243.20, stdev=207.20, samples=20 00:41:49.727 lat (msec) : 20=0.16%, 50=85.46%, 100=0.65%, 250=0.74%, 500=12.99% 00:41:49.727 cpu : usr=97.72%, sys=1.52%, ctx=92, majf=0, minf=46 00:41:49.727 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129185: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=243, BW=974KiB/s (997kB/s)(9792KiB/10055msec) 00:41:49.727 slat (usec): min=8, max=111, avg=22.35, stdev=21.21 00:41:49.727 clat (msec): min=19, max=380, avg=65.43, stdev=80.01 00:41:49.727 lat (msec): min=20, max=380, avg=65.45, stdev=80.00 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 264], 00:41:49.727 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 380], 00:41:49.727 | 99.99th=[ 380] 00:41:49.727 bw ( KiB/s): min= 128, max= 1920, per=4.25%, avg=972.80, stdev=841.00, samples=20 00:41:49.727 iops : min= 32, max= 480, avg=243.20, stdev=210.25, samples=20 00:41:49.727 lat (msec) : 20=0.04%, 50=86.23%, 250=1.80%, 500=11.93% 00:41:49.727 cpu : usr=98.36%, sys=1.22%, ctx=17, majf=0, minf=55 00:41:49.727 IO depths : 1=5.4%, 2=11.1%, 4=23.2%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129186: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=241, BW=967KiB/s 
(991kB/s)(9728KiB/10055msec) 00:41:49.727 slat (nsec): min=8283, max=65178, avg=21837.00, stdev=8506.04 00:41:49.727 clat (msec): min=21, max=378, avg=65.91, stdev=80.24 00:41:49.727 lat (msec): min=21, max=378, avg=65.93, stdev=80.24 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 257], 95.00th=[ 264], 00:41:49.727 | 99.00th=[ 288], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 380], 00:41:49.727 | 99.99th=[ 380] 00:41:49.727 bw ( KiB/s): min= 128, max= 1920, per=4.22%, avg=966.40, stdev=835.88, samples=20 00:41:49.727 iops : min= 32, max= 480, avg=241.60, stdev=208.97, samples=20 00:41:49.727 lat (msec) : 50=86.10%, 100=0.08%, 250=1.32%, 500=12.50% 00:41:49.727 cpu : usr=98.31%, sys=1.17%, ctx=36, majf=0, minf=34 00:41:49.727 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename1: (groupid=0, jobs=1): err= 0: pid=1129187: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=241, BW=965KiB/s (988kB/s)(9688KiB/10044msec) 00:41:49.727 slat (usec): min=8, max=113, avg=43.35, stdev=22.11 00:41:49.727 clat (msec): min=29, max=491, avg=65.84, stdev=82.72 00:41:49.727 lat (msec): min=29, max=491, avg=65.89, stdev=82.71 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.727 | 99.00th=[ 326], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 493], 00:41:49.727 | 99.99th=[ 493] 00:41:49.727 bw ( KiB/s): min= 128, max= 1920, per=4.21%, avg=962.40, stdev=838.86, samples=20 00:41:49.727 iops : min= 32, max= 480, avg=240.60, stdev=209.71, samples=20 00:41:49.727 lat (msec) : 50=86.46%, 100=0.08%, 250=1.73%, 500=11.73% 00:41:49.727 cpu : usr=97.96%, sys=1.46%, ctx=62, majf=0, minf=33 00:41:49.727 IO depths : 1=5.4%, 2=11.1%, 4=23.5%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:49.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 complete : 0=0.0%, 4=93.6%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.727 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.727 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.727 filename2: (groupid=0, jobs=1): err= 0: pid=1129188: Tue Oct 1 01:59:27 2024 00:41:49.727 read: IOPS=241, BW=967KiB/s (991kB/s)(9728KiB/10055msec) 00:41:49.727 slat (nsec): min=7646, max=90609, avg=32517.66, stdev=16624.06 00:41:49.727 clat (msec): min=26, max=407, avg=65.81, stdev=82.54 00:41:49.727 lat (msec): min=26, max=407, avg=65.84, stdev=82.54 00:41:49.727 clat percentiles (msec): 00:41:49.727 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.727 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.727 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 268], 00:41:49.727 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:41:49.727 | 99.99th=[ 409] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.22%, 
avg=966.40, stdev=835.09, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=241.60, stdev=208.77, samples=20 00:41:49.728 lat (msec) : 50=86.18%, 100=0.66%, 250=0.08%, 500=13.08% 00:41:49.728 cpu : usr=97.41%, sys=1.61%, ctx=63, majf=0, minf=47 00:41:49.728 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129189: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=240, BW=961KiB/s (984kB/s)(9656KiB/10044msec) 00:41:49.728 slat (usec): min=8, max=108, avg=34.28, stdev=13.38 00:41:49.728 clat (msec): min=25, max=503, avg=66.18, stdev=84.88 00:41:49.728 lat (msec): min=25, max=503, avg=66.21, stdev=84.88 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.728 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 418], 99.95th=[ 502], 00:41:49.728 | 99.99th=[ 502] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=959.35, stdev=841.78, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=239.80, stdev=210.41, samples=20 00:41:49.728 lat (msec) : 50=86.74%, 100=0.08%, 250=2.15%, 500=10.94%, 750=0.08% 00:41:49.728 cpu : usr=96.05%, sys=2.33%, ctx=584, majf=0, minf=36 00:41:49.728 IO depths : 1=4.6%, 2=10.2%, 4=22.9%, 8=54.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129190: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=240, BW=961KiB/s (984kB/s)(9648KiB/10042msec) 00:41:49.728 slat (usec): min=8, max=104, avg=35.38, stdev=19.60 00:41:49.728 clat (msec): min=30, max=497, avg=66.24, stdev=83.27 00:41:49.728 lat (msec): min=30, max=497, avg=66.27, stdev=83.26 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.728 | 99.00th=[ 363], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 498], 00:41:49.728 | 99.99th=[ 498] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=958.40, stdev=835.23, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=239.60, stdev=208.81, samples=20 00:41:49.728 lat (msec) : 50=85.82%, 100=0.66%, 250=1.58%, 500=11.94% 00:41:49.728 cpu : usr=98.25%, sys=1.31%, ctx=26, majf=0, minf=25 00:41:49.728 IO depths : 1=5.3%, 2=10.8%, 4=22.6%, 8=54.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 
00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129191: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=230, BW=924KiB/s (946kB/s)(9272KiB/10036msec) 00:41:49.728 slat (nsec): min=4527, max=66798, avg=30270.72, stdev=9867.04 00:41:49.728 clat (msec): min=27, max=544, avg=69.00, stdev=106.41 00:41:49.728 lat (msec): min=27, max=544, avg=69.03, stdev=106.41 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 102], 95.00th=[ 393], 00:41:49.728 | 99.00th=[ 426], 99.50th=[ 510], 99.90th=[ 527], 99.95th=[ 542], 00:41:49.728 | 99.99th=[ 542] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.02%, avg=920.80, stdev=866.19, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=230.20, stdev=216.55, samples=20 00:41:49.728 lat (msec) : 50=89.04%, 100=0.69%, 250=0.69%, 500=9.06%, 750=0.52% 00:41:49.728 cpu : usr=98.51%, sys=1.08%, ctx=16, majf=0, minf=31 00:41:49.728 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129192: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=247, BW=991KiB/s (1015kB/s)(9984KiB/10073msec) 00:41:49.728 slat (usec): min=4, max=116, avg=38.86, stdev=28.34 00:41:49.728 clat (msec): min=6, max=376, avg=64.20, stdev=78.33 00:41:49.728 lat (msec): min=6, max=376, avg=64.24, stdev=78.32 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.728 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 376], 00:41:49.728 | 99.99th=[ 376] 00:41:49.728 bw ( KiB/s): min= 144, max= 2048, per=4.34%, avg=992.00, stdev=849.83, samples=20 00:41:49.728 iops : min= 36, max= 512, avg=248.00, stdev=212.46, samples=20 00:41:49.728 lat (msec) : 10=1.16%, 20=1.20%, 50=83.53%, 100=0.64%, 250=0.88% 00:41:49.728 lat (msec) : 500=12.58% 00:41:49.728 cpu : usr=98.33%, sys=1.27%, ctx=12, majf=0, minf=40 00:41:49.728 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129193: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=235, BW=942KiB/s (965kB/s)(9472KiB/10055msec) 00:41:49.728 slat (usec): min=8, max=116, avg=29.23, stdev=17.78 00:41:49.728 clat (msec): min=22, max=520, avg=67.64, stdev=96.49 00:41:49.728 lat (msec): min=22, max=520, avg=67.67, stdev=96.50 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 257], 
95.00th=[ 355], 00:41:49.728 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 518], 99.95th=[ 518], 00:41:49.728 | 99.99th=[ 523] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=940.80, stdev=859.98, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=235.20, stdev=214.99, samples=20 00:41:49.728 lat (msec) : 50=88.43%, 100=0.08%, 250=1.18%, 500=10.05%, 750=0.25% 00:41:49.728 cpu : usr=98.37%, sys=1.20%, ctx=18, majf=0, minf=38 00:41:49.728 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129194: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=231, BW=925KiB/s (947kB/s)(9280KiB/10034msec) 00:41:49.728 slat (usec): min=8, max=119, avg=55.70, stdev=25.18 00:41:49.728 clat (msec): min=31, max=544, avg=68.70, stdev=106.31 00:41:49.728 lat (msec): min=31, max=544, avg=68.75, stdev=106.31 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 101], 95.00th=[ 393], 00:41:49.728 | 99.00th=[ 426], 99.50th=[ 514], 99.90th=[ 527], 99.95th=[ 542], 00:41:49.728 | 99.99th=[ 542] 00:41:49.728 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=921.60, stdev=865.53, samples=20 00:41:49.728 iops : min= 32, max= 480, avg=230.40, stdev=216.38, samples=20 00:41:49.728 lat (msec) : 50=88.97%, 100=0.69%, 250=0.78%, 500=8.97%, 750=0.60% 00:41:49.728 cpu : usr=97.20%, sys=1.71%, ctx=122, majf=0, minf=26 00:41:49.728 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 filename2: (groupid=0, jobs=1): err= 0: pid=1129195: Tue Oct 1 01:59:27 2024 00:41:49.728 read: IOPS=243, BW=974KiB/s (997kB/s)(9792KiB/10055msec) 00:41:49.728 slat (usec): min=8, max=171, avg=33.30, stdev=20.24 00:41:49.728 clat (msec): min=19, max=355, avg=65.39, stdev=78.67 00:41:49.728 lat (msec): min=20, max=355, avg=65.42, stdev=78.66 00:41:49.728 clat percentiles (msec): 00:41:49.728 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:49.728 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:49.728 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 259], 95.00th=[ 266], 00:41:49.728 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 355], 00:41:49.728 | 99.99th=[ 355] 00:41:49.728 bw ( KiB/s): min= 144, max= 1920, per=4.25%, avg=972.80, stdev=828.80, samples=20 00:41:49.728 iops : min= 36, max= 480, avg=243.20, stdev=207.20, samples=20 00:41:49.728 lat (msec) : 20=0.04%, 50=85.58%, 100=0.65%, 250=0.82%, 500=12.91% 00:41:49.728 cpu : usr=98.55%, sys=1.05%, ctx=15, majf=0, minf=22 00:41:49.728 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 complete : 0=0.0%, 
4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.728 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.728 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:49.728 00:41:49.729 Run status group 0 (all jobs): 00:41:49.729 READ: bw=22.3MiB/s (23.4MB/s), 919KiB/s-1002KiB/s (941kB/s-1026kB/s), io=225MiB (236MB), run=10026-10090msec 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 bdev_null0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 [2024-10-01 01:59:28.342664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 bdev_null1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:49.729 { 00:41:49.729 "params": { 00:41:49.729 "name": "Nvme$subsystem", 00:41:49.729 "trtype": "$TEST_TRANSPORT", 00:41:49.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.729 "adrfam": "ipv4", 00:41:49.729 "trsvcid": "$NVMF_PORT", 00:41:49.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.729 "hdgst": ${hdgst:-false}, 00:41:49.729 "ddgst": ${ddgst:-false} 00:41:49.729 }, 00:41:49.729 "method": "bdev_nvme_attach_controller" 00:41:49.729 } 00:41:49.729 EOF 00:41:49.729 )") 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:49.729 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:49.730 { 00:41:49.730 "params": { 00:41:49.730 "name": "Nvme$subsystem", 00:41:49.730 "trtype": "$TEST_TRANSPORT", 00:41:49.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.730 "adrfam": "ipv4", 00:41:49.730 "trsvcid": "$NVMF_PORT", 00:41:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.730 "hdgst": ${hdgst:-false}, 00:41:49.730 "ddgst": ${ddgst:-false} 00:41:49.730 }, 00:41:49.730 "method": "bdev_nvme_attach_controller" 00:41:49.730 } 00:41:49.730 EOF 00:41:49.730 )") 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
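The xtrace above is the harness wiring fio to the SPDK bdev plugin: the generated NVMe-oF attach configuration is passed on /dev/fd/62 through --spdk_json_conf, the generated fio job file on /dev/fd/61, and the plugin itself is injected via LD_PRELOAD (the libasan/libclang_rt.asan lookups would prepend a sanitizer library as well, but this is not an ASAN build, so asan_lib stays empty). A minimal standalone sketch of the same invocation, substituting ordinary files for the /dev/fd descriptors (the binary paths are the ones used in this run; the two file names are placeholders for illustration only):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/nvme_attach.json /tmp/dif_rand_params.fio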
00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:49.730 "params": { 00:41:49.730 "name": "Nvme0", 00:41:49.730 "trtype": "tcp", 00:41:49.730 "traddr": "10.0.0.2", 00:41:49.730 "adrfam": "ipv4", 00:41:49.730 "trsvcid": "4420", 00:41:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.730 "hdgst": false, 00:41:49.730 "ddgst": false 00:41:49.730 }, 00:41:49.730 "method": "bdev_nvme_attach_controller" 00:41:49.730 },{ 00:41:49.730 "params": { 00:41:49.730 "name": "Nvme1", 00:41:49.730 "trtype": "tcp", 00:41:49.730 "traddr": "10.0.0.2", 00:41:49.730 "adrfam": "ipv4", 00:41:49.730 "trsvcid": "4420", 00:41:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:49.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:49.730 "hdgst": false, 00:41:49.730 "ddgst": false 00:41:49.730 }, 00:41:49.730 "method": "bdev_nvme_attach_controller" 00:41:49.730 }' 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:49.730 01:59:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.730 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:49.730 ... 00:41:49.730 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:49.730 ... 
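The JSON printed above attaches two NVMe-oF controllers, Nvme0 and Nvme1, over TCP to 10.0.0.2:4420 with digests disabled, and the fio banner that follows shows the job shape: two filename sections of random reads with mixed 8k/16k/128k block sizes at iodepth 8. The literal gen_fio_conf output is not captured in this log, but a job section consistent with that banner would look roughly like the sketch below (filename=Nvme0n1 assumes namespace 1 of the Nvme0 controller attached above; thread mode and numjobs=2 are inferred from "Starting 4 threads" across two sections, and time_based with a ~5 s runtime is inferred from the run times in the summary further down):

    [global]
    thread=1
    ioengine=spdk_bdev
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2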
00:41:49.730 fio-3.35 00:41:49.730 Starting 4 threads 00:41:54.992 00:41:54.992 filename0: (groupid=0, jobs=1): err= 0: pid=1130576: Tue Oct 1 01:59:34 2024 00:41:54.992 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5001msec) 00:41:54.992 slat (nsec): min=7585, max=79271, avg=18285.74, stdev=8547.30 00:41:54.992 clat (usec): min=848, max=8177, avg=4359.79, stdev=576.81 00:41:54.992 lat (usec): min=864, max=8186, avg=4378.07, stdev=576.97 00:41:54.992 clat percentiles (usec): 00:41:54.992 | 1.00th=[ 2835], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 4113], 00:41:54.992 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:41:54.992 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5342], 00:41:54.992 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 8094], 00:41:54.992 | 99.99th=[ 8160] 00:41:54.992 bw ( KiB/s): min=13648, max=15008, per=25.15%, avg=14511.67, stdev=432.16, samples=9 00:41:54.992 iops : min= 1706, max= 1876, avg=1813.89, stdev=53.98, samples=9 00:41:54.992 lat (usec) : 1000=0.04% 00:41:54.992 lat (msec) : 2=0.24%, 4=13.18%, 10=86.53% 00:41:54.992 cpu : usr=92.38%, sys=6.26%, ctx=223, majf=0, minf=9 00:41:54.992 IO depths : 1=0.1%, 2=10.7%, 4=62.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 issued rwts: total=9050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.992 filename0: (groupid=0, jobs=1): err= 0: pid=1130577: Tue Oct 1 01:59:34 2024 00:41:54.992 read: IOPS=1789, BW=14.0MiB/s (14.7MB/s)(69.9MiB/5002msec) 00:41:54.992 slat (nsec): min=7375, max=89419, avg=17628.92, stdev=9348.60 00:41:54.992 clat (usec): min=896, max=8369, avg=4409.09, stdev=650.72 00:41:54.992 lat (usec): min=919, max=8387, avg=4426.72, stdev=650.57 00:41:54.992 clat percentiles (usec): 00:41:54.992 | 1.00th=[ 2835], 5.00th=[ 3621], 10.00th=[ 3949], 20.00th=[ 4146], 00:41:54.992 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:41:54.992 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5604], 00:41:54.992 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 8029], 00:41:54.992 | 99.99th=[ 8356] 00:41:54.992 bw ( KiB/s): min=13696, max=15664, per=24.95%, avg=14396.78, stdev=630.80, samples=9 00:41:54.992 iops : min= 1712, max= 1958, avg=1799.56, stdev=78.87, samples=9 00:41:54.992 lat (usec) : 1000=0.07% 00:41:54.992 lat (msec) : 2=0.26%, 4=11.01%, 10=88.67% 00:41:54.992 cpu : usr=94.20%, sys=5.02%, ctx=8, majf=0, minf=9 00:41:54.992 IO depths : 1=0.2%, 2=13.0%, 4=59.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 issued rwts: total=8950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.992 filename1: (groupid=0, jobs=1): err= 0: pid=1130578: Tue Oct 1 01:59:34 2024 00:41:54.992 read: IOPS=1793, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5003msec) 00:41:54.992 slat (nsec): min=7391, max=92229, avg=17869.94, stdev=9562.77 00:41:54.992 clat (usec): min=805, max=8336, avg=4394.93, stdev=621.72 00:41:54.992 lat (usec): min=818, max=8364, avg=4412.80, stdev=621.29 00:41:54.992 clat percentiles (usec): 00:41:54.992 | 1.00th=[ 2933], 5.00th=[ 
3654], 10.00th=[ 3949], 20.00th=[ 4113], 00:41:54.992 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:41:54.992 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5538], 00:41:54.992 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[ 8291], 00:41:54.992 | 99.99th=[ 8356] 00:41:54.992 bw ( KiB/s): min=13152, max=15008, per=24.88%, avg=14355.20, stdev=582.96, samples=10 00:41:54.992 iops : min= 1644, max= 1876, avg=1794.40, stdev=72.87, samples=10 00:41:54.992 lat (usec) : 1000=0.02% 00:41:54.992 lat (msec) : 2=0.41%, 4=11.74%, 10=87.82% 00:41:54.992 cpu : usr=94.32%, sys=5.04%, ctx=9, majf=0, minf=10 00:41:54.992 IO depths : 1=0.1%, 2=14.1%, 4=59.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 issued rwts: total=8975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.992 filename1: (groupid=0, jobs=1): err= 0: pid=1130579: Tue Oct 1 01:59:34 2024 00:41:54.992 read: IOPS=1819, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5003msec) 00:41:54.992 slat (nsec): min=7321, max=73504, avg=16167.59, stdev=8512.12 00:41:54.992 clat (usec): min=760, max=8321, avg=4342.24, stdev=589.22 00:41:54.992 lat (usec): min=773, max=8336, avg=4358.40, stdev=589.15 00:41:54.992 clat percentiles (usec): 00:41:54.992 | 1.00th=[ 2704], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 4113], 00:41:54.992 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:41:54.992 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5276], 00:41:54.992 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 7767], 00:41:54.992 | 99.99th=[ 8291] 00:41:54.992 bw ( KiB/s): min=13760, max=15280, per=25.23%, avg=14556.80, stdev=483.06, samples=10 00:41:54.992 iops : min= 1720, max= 1910, avg=1819.60, stdev=60.38, samples=10 00:41:54.992 lat (usec) : 1000=0.03% 00:41:54.992 lat (msec) : 2=0.38%, 4=13.42%, 10=86.16% 00:41:54.992 cpu : usr=93.90%, sys=5.46%, ctx=13, majf=0, minf=9 00:41:54.992 IO depths : 1=0.1%, 2=12.3%, 4=60.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.992 issued rwts: total=9103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.992 00:41:54.992 Run status group 0 (all jobs): 00:41:54.992 READ: bw=56.3MiB/s (59.1MB/s), 14.0MiB/s-14.2MiB/s (14.7MB/s-14.9MB/s), io=282MiB (296MB), run=5001-5003msec 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
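As a quick consistency check on the summary above: the four jobs report 14.1, 14.0, 14.0 and 14.2 MiB/s, which matches the aggregate READ bandwidth of 56.3 MiB/s, and the total of 282 MiB read over the roughly 5.0 s runtime again works out to about 56 MiB/s. When a run like this is saved to a file, the per-job figures can be pulled out with something along these lines (the log file name is assumed):

    grep -oP 'read: IOPS=\d+, BW=\K[0-9.]+MiB/s' fio_dif_rand_params.log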
00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:54.992 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.993 00:41:54.993 real 0m24.452s 00:41:54.993 user 4m33.752s 00:41:54.993 sys 0m6.449s 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 ************************************ 00:41:54.993 END TEST fio_dif_rand_params 00:41:54.993 ************************************ 00:41:54.993 01:59:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:54.993 01:59:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:54.993 01:59:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 ************************************ 00:41:54.993 START TEST fio_dif_digest 00:41:54.993 ************************************ 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:54.993 01:59:34 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.993 bdev_null0 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.993 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.251 [2024-10-01 01:59:34.864722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local 
file 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:55.251 { 00:41:55.251 "params": { 00:41:55.251 "name": "Nvme$subsystem", 00:41:55.251 "trtype": "$TEST_TRANSPORT", 00:41:55.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:55.251 "adrfam": "ipv4", 00:41:55.251 "trsvcid": "$NVMF_PORT", 00:41:55.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:55.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:55.251 "hdgst": ${hdgst:-false}, 00:41:55.251 "ddgst": ${ddgst:-false} 00:41:55.251 }, 00:41:55.251 "method": "bdev_nvme_attach_controller" 00:41:55.251 } 00:41:55.251 EOF 00:41:55.251 )") 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
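The digest test sets up the target with the same pattern as before, except that the null bdev is created with --dif-type 3 and the host side will enable header and data digests. The rpc_cmd calls above are the harness's wrapper around scripts/rpc.py; run by hand against the same target (and assuming a TCP transport has already been created, as the harness does earlier), the setup amounts to:

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420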
00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:41:55.251 01:59:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:55.251 "params": { 00:41:55.251 "name": "Nvme0", 00:41:55.251 "trtype": "tcp", 00:41:55.252 "traddr": "10.0.0.2", 00:41:55.252 "adrfam": "ipv4", 00:41:55.252 "trsvcid": "4420", 00:41:55.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:55.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:55.252 "hdgst": true, 00:41:55.252 "ddgst": true 00:41:55.252 }, 00:41:55.252 "method": "bdev_nvme_attach_controller" 00:41:55.252 }' 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:55.252 01:59:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.510 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:55.510 ... 
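The attach configuration printed above differs from the rand_params case only in hdgst and ddgst being true, so the NVMe/TCP connection carries header and data digests on its PDUs. The workload itself, from the parameters set at the start of this test and the banner above, is three threads of 128k random reads at queue depth 3 for 10 seconds; a reconstructed job section, with the same caveat that the literal gen_fio_conf output is not in the log, would be roughly:

    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=10

(The [global] part would mirror the earlier sketch: thread mode and ioengine=spdk_bdev.)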
00:41:55.510 fio-3.35 00:41:55.510 Starting 3 threads 00:42:07.719 00:42:07.719 filename0: (groupid=0, jobs=1): err= 0: pid=1131326: Tue Oct 1 01:59:45 2024 00:42:07.719 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10046msec) 00:42:07.719 slat (nsec): min=4593, max=88493, avg=21291.75, stdev=8801.27 00:42:07.719 clat (usec): min=10662, max=57475, avg=14961.49, stdev=2355.72 00:42:07.719 lat (usec): min=10676, max=57486, avg=14982.78, stdev=2356.06 00:42:07.719 clat percentiles (usec): 00:42:07.719 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13435], 20.00th=[13829], 00:42:07.719 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:42:07.719 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:42:07.719 | 99.00th=[17695], 99.50th=[18220], 99.90th=[56361], 99.95th=[56886], 00:42:07.719 | 99.99th=[57410] 00:42:07.719 bw ( KiB/s): min=24832, max=27136, per=33.87%, avg=25664.00, stdev=658.59, samples=20 00:42:07.719 iops : min= 194, max= 212, avg=200.50, stdev= 5.15, samples=20 00:42:07.719 lat (msec) : 20=99.75%, 100=0.25% 00:42:07.719 cpu : usr=90.30%, sys=8.28%, ctx=148, majf=0, minf=220 00:42:07.719 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.719 filename0: (groupid=0, jobs=1): err= 0: pid=1131327: Tue Oct 1 01:59:45 2024 00:42:07.719 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(244MiB/10047msec) 00:42:07.719 slat (nsec): min=4552, max=48724, avg=15701.42, stdev=4762.59 00:42:07.719 clat (usec): min=10096, max=51269, avg=15431.20, stdev=1656.80 00:42:07.719 lat (usec): min=10109, max=51282, avg=15446.90, stdev=1656.84 00:42:07.719 clat percentiles (usec): 00:42:07.719 | 1.00th=[12387], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:42:07.719 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:42:07.719 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:42:07.719 | 99.00th=[18220], 99.50th=[18482], 99.90th=[48497], 99.95th=[51119], 00:42:07.719 | 99.99th=[51119] 00:42:07.719 bw ( KiB/s): min=23808, max=27392, per=32.85%, avg=24896.00, stdev=968.16, samples=20 00:42:07.719 iops : min= 186, max= 214, avg=194.50, stdev= 7.56, samples=20 00:42:07.719 lat (msec) : 20=99.79%, 50=0.15%, 100=0.05% 00:42:07.719 cpu : usr=92.29%, sys=7.24%, ctx=27, majf=0, minf=113 00:42:07.719 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.719 filename0: (groupid=0, jobs=1): err= 0: pid=1131328: Tue Oct 1 01:59:45 2024 00:42:07.719 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10047msec) 00:42:07.719 slat (nsec): min=4462, max=47306, avg=15757.53, stdev=4851.55 00:42:07.719 clat (usec): min=8582, max=46691, avg=15069.55, stdev=1422.48 00:42:07.719 lat (usec): min=8595, max=46705, avg=15085.31, stdev=1422.73 00:42:07.719 clat percentiles (usec): 00:42:07.719 | 1.00th=[11600], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:42:07.719 | 
30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:42:07.719 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[16909], 00:42:07.719 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[46924], 00:42:07.719 | 99.99th=[46924] 00:42:07.719 bw ( KiB/s): min=24320, max=27904, per=33.60%, avg=25459.20, stdev=1012.06, samples=20 00:42:07.719 iops : min= 190, max= 218, avg=198.90, stdev= 7.91, samples=20 00:42:07.719 lat (msec) : 10=0.35%, 20=99.60%, 50=0.05% 00:42:07.719 cpu : usr=92.15%, sys=7.38%, ctx=21, majf=0, minf=162 00:42:07.719 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.719 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.719 00:42:07.719 Run status group 0 (all jobs): 00:42:07.719 READ: bw=74.0MiB/s (77.6MB/s), 24.2MiB/s-25.0MiB/s (25.4MB/s-26.2MB/s), io=744MiB (780MB), run=10046-10047msec 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.719 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:07.720 00:42:07.720 real 0m11.221s 00:42:07.720 user 0m28.749s 00:42:07.720 sys 0m2.578s 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:07.720 01:59:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.720 ************************************ 00:42:07.720 END TEST fio_dif_digest 00:42:07.720 ************************************ 00:42:07.720 01:59:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:07.720 01:59:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:07.720 rmmod nvme_tcp 00:42:07.720 rmmod nvme_fabrics 00:42:07.720 rmmod nvme_keyring 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 1125290 ']' 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 1125290 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1125290 ']' 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1125290 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125290 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125290' 00:42:07.720 killing process with pid 1125290 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1125290 00:42:07.720 01:59:46 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1125290 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:07.720 01:59:46 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:07.720 Waiting for block devices as requested 00:42:07.720 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:07.978 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:07.978 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:07.978 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:08.235 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:08.235 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:08.235 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:08.235 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:08.235 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:08.494 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:08.494 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:08.494 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:08.753 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:08.753 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:08.753 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:08.753 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:09.011 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:09.011 01:59:48 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.011 01:59:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:09.011 01:59:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.543 01:59:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:11.543 00:42:11.543 real 1m7.317s 
00:42:11.543 user 6m30.994s 00:42:11.543 sys 0m18.074s 00:42:11.543 01:59:50 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:11.543 01:59:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:11.543 ************************************ 00:42:11.543 END TEST nvmf_dif 00:42:11.543 ************************************ 00:42:11.543 01:59:50 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:11.543 01:59:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:11.543 01:59:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:11.543 01:59:50 -- common/autotest_common.sh@10 -- # set +x 00:42:11.543 ************************************ 00:42:11.543 START TEST nvmf_abort_qd_sizes 00:42:11.543 ************************************ 00:42:11.543 01:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:11.543 * Looking for test storage... 00:42:11.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:11.543 01:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:11.543 01:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:42:11.543 01:59:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:11.543 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:11.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.543 --rc genhtml_branch_coverage=1 00:42:11.543 --rc genhtml_function_coverage=1 00:42:11.543 --rc genhtml_legend=1 00:42:11.543 --rc geninfo_all_blocks=1 00:42:11.543 --rc geninfo_unexecuted_blocks=1 00:42:11.543 00:42:11.543 ' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.544 --rc genhtml_branch_coverage=1 00:42:11.544 --rc genhtml_function_coverage=1 00:42:11.544 --rc genhtml_legend=1 00:42:11.544 --rc geninfo_all_blocks=1 00:42:11.544 --rc geninfo_unexecuted_blocks=1 00:42:11.544 00:42:11.544 ' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.544 --rc genhtml_branch_coverage=1 00:42:11.544 --rc genhtml_function_coverage=1 00:42:11.544 --rc genhtml_legend=1 00:42:11.544 --rc geninfo_all_blocks=1 00:42:11.544 --rc geninfo_unexecuted_blocks=1 00:42:11.544 00:42:11.544 ' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.544 --rc genhtml_branch_coverage=1 00:42:11.544 --rc genhtml_function_coverage=1 00:42:11.544 --rc genhtml_legend=1 00:42:11.544 --rc geninfo_all_blocks=1 00:42:11.544 --rc geninfo_unexecuted_blocks=1 00:42:11.544 00:42:11.544 ' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:11.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:11.544 01:59:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:13.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:13.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:13.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:13.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:13.445 01:59:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:13.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:13.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:42:13.445 00:42:13.445 --- 10.0.0.2 ping statistics --- 00:42:13.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.445 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:13.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:13.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:42:13.445 00:42:13.445 --- 10.0.0.1 ping statistics --- 00:42:13.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.445 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:42:13.445 01:59:53 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:14.819 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:14.819 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:14.819 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:15.754 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=1136235 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 1136235 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1136235 ']' 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:15.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:15.754 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:15.754 [2024-10-01 01:59:55.507099] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:15.754 [2024-10-01 01:59:55.507177] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:15.754 [2024-10-01 01:59:55.577752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:16.012 [2024-10-01 01:59:55.670256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:16.012 [2024-10-01 01:59:55.670320] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:16.012 [2024-10-01 01:59:55.670337] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:16.012 [2024-10-01 01:59:55.670350] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:16.012 [2024-10-01 01:59:55.670361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:16.012 [2024-10-01 01:59:55.670420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.012 [2024-10-01 01:59:55.670489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:16.012 [2024-10-01 01:59:55.670579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:16.012 [2024-10-01 01:59:55.670581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:16.012 
01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:16.012 01:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:16.012 ************************************ 00:42:16.012 START TEST spdk_target_abort 00:42:16.012 ************************************ 00:42:16.012 01:59:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:16.012 01:59:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:16.012 01:59:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:16.012 01:59:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.012 01:59:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:19.289 spdk_targetn1 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:19.289 [2024-10-01 01:59:58.695632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.289 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:19.290 [2024-10-01 01:59:58.727909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:19.290 01:59:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:22.568 Initializing NVMe Controllers 00:42:22.568 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:22.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:22.568 Initialization complete. Launching workers. 00:42:22.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11935, failed: 0 00:42:22.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 10681 00:42:22.568 success 775, unsuccessful 479, failed 0 00:42:22.568 02:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:22.568 02:00:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:25.930 Initializing NVMe Controllers 00:42:25.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:25.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:25.930 Initialization complete. Launching workers. 00:42:25.930 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8519, failed: 0 00:42:25.930 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7255 00:42:25.930 success 285, unsuccessful 979, failed 0 00:42:25.930 02:00:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:25.930 02:00:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:29.209 Initializing NVMe Controllers 00:42:29.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:29.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:29.209 Initialization complete. Launching workers. 
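For reference, the spdk_target_abort phase interleaved with this output reduces to a short RPC sequence plus the queue-depth loop. A minimal sketch, driving the same RPCs with scripts/rpc.py directly instead of the test's rpc_cmd wrapper and target namespace; SPDK_DIR and the default RPC socket are assumptions, every flag is taken from the trace:

    # Expose local NVMe device 0000:88:00.0 as an NVMe-oF/TCP subsystem,
    # then run the abort example at increasing queue depths.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    rpc="$SPDK_DIR/scripts/rpc.py"

    "$rpc" bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # creates spdk_targetn1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # Same abort workload as in the log: 50% reads/writes, 4 KiB I/O, qd 4/24/64.
    for qd in 4 24 64; do
        "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done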
00:42:29.209 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31271, failed: 0 00:42:29.209 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2706, failed to submit 28565 00:42:29.209 success 540, unsuccessful 2166, failed 0 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.209 02:00:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1136235 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1136235 ']' 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1136235 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136235 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136235' 00:42:30.142 killing process with pid 1136235 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1136235 00:42:30.142 02:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1136235 00:42:30.400 00:42:30.400 real 0m14.187s 00:42:30.400 user 0m53.774s 00:42:30.400 sys 0m2.538s 00:42:30.400 02:00:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:30.400 02:00:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:30.400 ************************************ 00:42:30.400 END TEST spdk_target_abort 00:42:30.400 ************************************ 00:42:30.400 02:00:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:30.400 02:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:30.401 02:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:30.401 02:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:30.401 ************************************ 00:42:30.401 START TEST kernel_target_abort 00:42:30.401 
************************************ 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:30.401 02:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:31.335 Waiting for block devices as requested 00:42:31.335 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:31.593 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:31.593 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:31.851 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:31.851 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:31.851 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:31.851 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.110 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:32.110 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:32.110 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:32.110 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:32.369 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:32.369 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:32.369 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:32.369 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.628 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:32.628 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:32.628 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:32.886 No valid GPT data, bailing 00:42:32.886 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:32.886 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:32.886 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:32.886 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:32.887 02:00:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:32.887 00:42:32.887 Discovery Log Number of Records 2, Generation counter 2 00:42:32.887 =====Discovery Log Entry 0====== 00:42:32.887 trtype: tcp 00:42:32.887 adrfam: ipv4 00:42:32.887 subtype: current discovery subsystem 00:42:32.887 treq: not specified, sq flow control disable supported 00:42:32.887 portid: 1 00:42:32.887 trsvcid: 4420 00:42:32.887 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:32.887 traddr: 10.0.0.1 00:42:32.887 eflags: none 00:42:32.887 sectype: none 00:42:32.887 =====Discovery Log Entry 1====== 00:42:32.887 trtype: tcp 00:42:32.887 adrfam: ipv4 00:42:32.887 subtype: nvme subsystem 00:42:32.887 treq: not specified, sq flow control disable supported 00:42:32.887 portid: 1 00:42:32.887 trsvcid: 4420 00:42:32.887 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:32.887 traddr: 10.0.0.1 00:42:32.887 eflags: none 00:42:32.887 sectype: none 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:32.887 02:00:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:32.887 02:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:36.166 Initializing NVMe Controllers 00:42:36.166 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:36.167 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:36.167 Initialization complete. Launching workers. 00:42:36.167 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38991, failed: 0 00:42:36.167 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38991, failed to submit 0 00:42:36.167 success 0, unsuccessful 38991, failed 0 00:42:36.167 02:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:36.167 02:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:39.443 Initializing NVMe Controllers 00:42:39.443 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:39.443 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:39.443 Initialization complete. Launching workers. 
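The kernel_target_abort runs shown here sit on top of the configfs setup traced above. The trace records the mkdir/echo commands but not their redirect targets, so the sketch below fills those in with the standard nvmet configfs attribute names (assumed, not taken from the log); everything else (NQN, device, address, port) comes from the trace:

    # Export /dev/nvme0n1 through the in-kernel nvmet target on 10.0.0.1:4420.
    modprobe nvmet
    modprobe nvmet_tcp                                    # TCP transport, if not auto-loaded
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"     # attribute files assumed (standard nvmet layout)
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list nqn.2016-06.io.spdk:testnqn

The discovery log printed earlier (two records: the discovery subsystem and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420) is what this configuration produces.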
00:42:39.443 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69722, failed: 0 00:42:39.443 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17594, failed to submit 52128 00:42:39.443 success 0, unsuccessful 17594, failed 0 00:42:39.443 02:00:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:39.443 02:00:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:42.725 Initializing NVMe Controllers 00:42:42.725 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:42.725 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:42.725 Initialization complete. Launching workers. 00:42:42.725 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72604, failed: 0 00:42:42.725 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18134, failed to submit 54470 00:42:42.725 success 0, unsuccessful 18134, failed 0 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:42:42.725 02:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:42:42.725 02:00:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:43.663 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:43.663 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:43.663 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:43.663 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:44.599 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:44.599 00:42:44.599 real 0m14.254s 00:42:44.599 user 0m5.796s 00:42:44.599 sys 0m3.345s 00:42:44.599 02:00:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:44.599 02:00:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.599 ************************************ 00:42:44.599 END TEST kernel_target_abort 00:42:44.599 ************************************ 00:42:44.599 02:00:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:44.599 02:00:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:44.599 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:44.599 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:44.600 rmmod nvme_tcp 00:42:44.600 rmmod nvme_fabrics 00:42:44.600 rmmod nvme_keyring 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 1136235 ']' 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 1136235 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1136235 ']' 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1136235 00:42:44.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1136235) - No such process 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1136235 is not found' 00:42:44.600 Process with pid 1136235 is not found 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:42:44.600 02:00:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:45.974 Waiting for block devices as requested 00:42:45.974 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:45.974 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:45.974 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:46.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:46.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:46.232 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:46.232 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:46.232 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:46.491 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:46.491 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:46.491 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:46.491 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:46.750 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:46.750 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:46.750 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:46.750 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:47.008 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:47.008 02:00:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:49.540 02:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:49.540 00:42:49.540 real 0m37.940s 00:42:49.540 user 1m1.764s 00:42:49.540 sys 0m9.336s 00:42:49.540 02:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:49.540 02:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:49.540 ************************************ 00:42:49.540 END TEST nvmf_abort_qd_sizes 00:42:49.540 ************************************ 00:42:49.540 02:00:28 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:49.540 02:00:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:49.540 02:00:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:49.540 02:00:28 -- common/autotest_common.sh@10 -- # set +x 00:42:49.540 ************************************ 00:42:49.540 START TEST keyring_file 00:42:49.540 ************************************ 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:49.540 * Looking for test storage... 
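The nvmftestfini teardown traced just above mirrors the setup: unload the initiator modules, drop only the SPDK-tagged iptables rules, remove the target namespace, and flush the leftover address. A condensed sketch under the assumption that _remove_spdk_ns (whose body is not shown in the trace) boils down to deleting the namespace:

    modprobe -v -r nvme-tcp                               # takes nvme_fabrics/nvme_keyring with it, as in the rmmod lines above
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except the SPDK_NVMF-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1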
00:42:49.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:49.540 02:00:28 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.540 --rc genhtml_branch_coverage=1 00:42:49.540 --rc genhtml_function_coverage=1 00:42:49.540 --rc genhtml_legend=1 00:42:49.540 --rc geninfo_all_blocks=1 00:42:49.540 --rc geninfo_unexecuted_blocks=1 00:42:49.540 00:42:49.540 ' 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.540 --rc genhtml_branch_coverage=1 00:42:49.540 --rc genhtml_function_coverage=1 00:42:49.540 --rc genhtml_legend=1 00:42:49.540 --rc geninfo_all_blocks=1 
00:42:49.540 --rc geninfo_unexecuted_blocks=1 00:42:49.540 00:42:49.540 ' 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.540 --rc genhtml_branch_coverage=1 00:42:49.540 --rc genhtml_function_coverage=1 00:42:49.540 --rc genhtml_legend=1 00:42:49.540 --rc geninfo_all_blocks=1 00:42:49.540 --rc geninfo_unexecuted_blocks=1 00:42:49.540 00:42:49.540 ' 00:42:49.540 02:00:28 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.540 --rc genhtml_branch_coverage=1 00:42:49.540 --rc genhtml_function_coverage=1 00:42:49.540 --rc genhtml_legend=1 00:42:49.540 --rc geninfo_all_blocks=1 00:42:49.540 --rc geninfo_unexecuted_blocks=1 00:42:49.540 00:42:49.540 ' 00:42:49.540 02:00:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:49.540 02:00:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:49.540 02:00:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:49.541 02:00:28 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:49.541 02:00:28 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:49.541 02:00:28 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:49.541 02:00:28 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:49.541 02:00:28 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.541 02:00:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.541 02:00:28 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.541 02:00:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:49.541 02:00:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:49.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:49.541 02:00:28 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:49.541 02:00:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:49.541 02:00:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:49.541 02:00:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:49.541 02:00:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:49.541 02:00:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:49.541 02:00:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
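The key material defined above (key0/key1 as raw hex) is turned into usable TLS PSK files by prep_key, whose trace follows. In outline it wraps the hex key in the NVMe TLS PSK interchange format (the NVMeTLSkey-1 prefix and digest 0 are visible in the trace; the base64/CRC payload is computed by the inline python helper) and stores it in a 0600 temp file. A brief sketch; the redirect into the temp file is an assumption, since the trace shows the helper call, the mktemp and the chmod but not the redirection itself:

    key=00112233445566778899aabbccddeeff           # key0 from the trace
    path=$(mktemp)                                 # e.g. /tmp/tmp.g4TzrWA27O
    format_interchange_psk "$key" 0 > "$path"      # nvmf/common.sh helper; emits an NVMeTLSkey-1 string
    chmod 0600 "$path"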
00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g4TzrWA27O 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g4TzrWA27O 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g4TzrWA27O 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.g4TzrWA27O 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1PuIIOBvWZ 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:49.541 02:00:29 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1PuIIOBvWZ 00:42:49.541 02:00:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1PuIIOBvWZ 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1PuIIOBvWZ 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=1142629 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:49.541 02:00:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1142629 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1142629 ']' 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:49.541 02:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:49.541 [2024-10-01 02:00:29.138421] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:49.541 [2024-10-01 02:00:29.138510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142629 ] 00:42:49.541 [2024-10-01 02:00:29.203271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.541 [2024-10-01 02:00:29.298489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:49.799 02:00:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:49.799 [2024-10-01 02:00:29.565180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:49.799 null0 00:42:49.799 [2024-10-01 02:00:29.597242] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:49.799 [2024-10-01 02:00:29.597743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:49.799 02:00:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:49.799 [2024-10-01 02:00:29.621301] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:49.799 request: 00:42:49.799 { 00:42:49.799 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:49.799 "secure_channel": false, 00:42:49.799 "listen_address": { 00:42:49.799 "trtype": "tcp", 00:42:49.799 "traddr": "127.0.0.1", 00:42:49.799 "trsvcid": "4420" 00:42:49.799 }, 00:42:49.799 "method": "nvmf_subsystem_add_listener", 00:42:49.799 "req_id": 1 00:42:49.799 } 00:42:49.799 Got JSON-RPC error response 00:42:49.799 response: 00:42:49.799 { 00:42:49.799 
"code": -32602, 00:42:49.799 "message": "Invalid parameters" 00:42:49.799 } 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:49.799 02:00:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=1142644 00:42:49.799 02:00:29 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:49.799 02:00:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1142644 /var/tmp/bperf.sock 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1142644 ']' 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:49.799 02:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:50.058 [2024-10-01 02:00:29.672662] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:42:50.058 [2024-10-01 02:00:29.672740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142644 ] 00:42:50.058 [2024-10-01 02:00:29.734460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.058 [2024-10-01 02:00:29.825395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:50.316 02:00:29 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:50.316 02:00:29 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:50.316 02:00:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:50.316 02:00:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:50.574 02:00:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1PuIIOBvWZ 00:42:50.574 02:00:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1PuIIOBvWZ 00:42:50.832 02:00:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:50.832 02:00:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:50.832 02:00:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:50.832 02:00:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:50.832 02:00:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:51.089 02:00:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.g4TzrWA27O == \/\t\m\p\/\t\m\p\.\g\4\T\z\r\W\A\2\7\O ]] 00:42:51.089 02:00:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:51.089 02:00:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:51.089 02:00:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.089 02:00:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.089 02:00:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.346 02:00:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.1PuIIOBvWZ == \/\t\m\p\/\t\m\p\.\1\P\u\I\I\O\B\v\W\Z ]] 00:42:51.346 02:00:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:51.346 02:00:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:51.346 02:00:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.346 02:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.346 02:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.346 02:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:51.604 02:00:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:51.604 02:00:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:51.604 02:00:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:51.604 02:00:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.604 02:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.604 02:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.604 02:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.861 02:00:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:51.861 02:00:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:51.861 02:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:52.119 [2024-10-01 02:00:31.846683] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:52.119 nvme0n1 00:42:52.119 02:00:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:52.119 02:00:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:52.119 02:00:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.119 02:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.119 02:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.119 02:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:52.377 02:00:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:52.377 02:00:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:52.377 02:00:32 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:52.377 02:00:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.377 02:00:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.377 02:00:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.377 02:00:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.635 02:00:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:52.635 02:00:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:52.893 Running I/O for 1 seconds... 00:42:53.827 7084.00 IOPS, 27.67 MiB/s 00:42:53.827 Latency(us) 00:42:53.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.827 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:53.827 nvme0n1 : 1.01 7134.72 27.87 0.00 0.00 17871.78 9029.40 31457.28 00:42:53.827 =================================================================================================================== 00:42:53.827 Total : 7134.72 27.87 0.00 0.00 17871.78 9029.40 31457.28 00:42:53.827 { 00:42:53.827 "results": [ 00:42:53.827 { 00:42:53.827 "job": "nvme0n1", 00:42:53.827 "core_mask": "0x2", 00:42:53.827 "workload": "randrw", 00:42:53.827 "percentage": 50, 00:42:53.827 "status": "finished", 00:42:53.827 "queue_depth": 128, 00:42:53.827 "io_size": 4096, 00:42:53.827 "runtime": 1.010971, 00:42:53.827 "iops": 7134.7249327626605, 00:42:53.827 "mibps": 27.870019268604143, 00:42:53.827 "io_failed": 0, 00:42:53.827 "io_timeout": 0, 00:42:53.827 "avg_latency_us": 17871.775657737315, 00:42:53.828 "min_latency_us": 9029.404444444444, 00:42:53.828 "max_latency_us": 31457.28 00:42:53.828 } 00:42:53.828 ], 00:42:53.828 "core_count": 1 00:42:53.828 } 00:42:53.828 02:00:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:53.828 02:00:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:54.086 02:00:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:54.086 02:00:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:54.086 02:00:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:54.086 02:00:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:54.086 02:00:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.086 02:00:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:54.345 02:00:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:54.345 02:00:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:54.345 02:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:54.345 02:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:54.345 02:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:54.345 02:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.345 02:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:54.603 02:00:34 
keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:54.603 02:00:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:54.603 02:00:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:54.603 02:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:54.861 [2024-10-01 02:00:34.699836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:54.861 [2024-10-01 02:00:34.700404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe110 (107): Transport endpoint is not connected 00:42:54.861 [2024-10-01 02:00:34.701391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe110 (9): Bad file descriptor 00:42:54.861 [2024-10-01 02:00:34.702388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:54.861 [2024-10-01 02:00:34.702417] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:54.861 [2024-10-01 02:00:34.702443] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:54.861 [2024-10-01 02:00:34.702473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:54.861 request: 00:42:54.862 { 00:42:54.862 "name": "nvme0", 00:42:54.862 "trtype": "tcp", 00:42:54.862 "traddr": "127.0.0.1", 00:42:54.862 "adrfam": "ipv4", 00:42:54.862 "trsvcid": "4420", 00:42:54.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:54.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:54.862 "prchk_reftag": false, 00:42:54.862 "prchk_guard": false, 00:42:54.862 "hdgst": false, 00:42:54.862 "ddgst": false, 00:42:54.862 "psk": "key1", 00:42:54.862 "allow_unrecognized_csi": false, 00:42:54.862 "method": "bdev_nvme_attach_controller", 00:42:54.862 "req_id": 1 00:42:54.862 } 00:42:54.862 Got JSON-RPC error response 00:42:54.862 response: 00:42:54.862 { 00:42:54.862 "code": -5, 00:42:54.862 "message": "Input/output error" 00:42:54.862 } 00:42:55.120 02:00:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:55.120 02:00:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:55.120 02:00:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:55.120 02:00:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:55.120 02:00:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:55.120 02:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:55.120 02:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.120 02:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.120 02:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.120 02:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.378 02:00:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:55.378 02:00:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:55.378 02:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:55.378 02:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.378 02:00:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.378 02:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.378 02:00:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:55.636 02:00:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:55.636 02:00:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:55.636 02:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:55.895 02:00:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:55.895 02:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:56.211 02:00:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:56.211 02:00:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:56.211 02:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.499 02:00:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:56.499 02:00:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.g4TzrWA27O 00:42:56.499 02:00:36 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.499 02:00:36 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:56.499 02:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:56.499 [2024-10-01 02:00:36.348236] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.g4TzrWA27O': 0100660 00:42:56.499 [2024-10-01 02:00:36.348271] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:56.756 request: 00:42:56.756 { 00:42:56.756 "name": "key0", 00:42:56.756 "path": "/tmp/tmp.g4TzrWA27O", 00:42:56.756 "method": "keyring_file_add_key", 00:42:56.756 "req_id": 1 00:42:56.756 } 00:42:56.756 Got JSON-RPC error response 00:42:56.756 response: 00:42:56.756 { 00:42:56.756 "code": -1, 00:42:56.756 "message": "Operation not permitted" 00:42:56.756 } 00:42:56.756 02:00:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:56.756 02:00:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:56.756 02:00:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:56.756 02:00:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:56.756 02:00:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.g4TzrWA27O 00:42:56.756 02:00:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:56.756 02:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g4TzrWA27O 00:42:57.014 02:00:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.g4TzrWA27O 00:42:57.014 02:00:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:57.014 02:00:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:57.014 02:00:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.014 02:00:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.014 02:00:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.014 02:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.273 02:00:36 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:57.273 02:00:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
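The sequence above exercises the key-file permission gate: with the file at 0660, keyring_file_add_key is rejected with "Invalid permissions for key file ... 0100660" (Operation not permitted); after chmod 0600 the add succeeds, and the backing file is then deleted while the key stays registered. A short illustrative continuation of that flow, under the same assumptions as the earlier sketch ("$path" is the hypothetical key file from that sketch, scripts/rpc.py abbreviates the full repo path):

scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0   # start clean, as the trace does
chmod 0660 "$path"
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path" \
  || echo "rejected: keyring_file only accepts 0600 key files"
chmod 0600 "$path"
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
rm -f "$path"   # the key stays registered; the file is only consulted when used
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 \
  || echo "fails once the file is gone, as the attach attempt that follows shows"

As the next part of the trace shows, the attach after rm -f fails with "Could not stat key file ... No such file or directory" and the RPC returns -19 (No such device), confirming that the file contents are read at use time rather than at registration time.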
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:57.273 02:00:36 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:57.273 02:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:57.531 [2024-10-01 02:00:37.186541] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.g4TzrWA27O': No such file or directory 00:42:57.531 [2024-10-01 02:00:37.186582] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:57.531 [2024-10-01 02:00:37.186614] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:57.531 [2024-10-01 02:00:37.186634] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:57.531 [2024-10-01 02:00:37.186653] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:57.531 [2024-10-01 02:00:37.186670] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:57.531 request: 00:42:57.531 { 00:42:57.531 "name": "nvme0", 00:42:57.531 "trtype": "tcp", 00:42:57.531 "traddr": "127.0.0.1", 00:42:57.531 "adrfam": "ipv4", 00:42:57.531 "trsvcid": "4420", 00:42:57.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:57.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:57.531 "prchk_reftag": false, 00:42:57.531 "prchk_guard": false, 00:42:57.531 "hdgst": false, 00:42:57.531 "ddgst": false, 00:42:57.531 "psk": "key0", 00:42:57.531 "allow_unrecognized_csi": false, 00:42:57.531 "method": "bdev_nvme_attach_controller", 00:42:57.531 "req_id": 1 00:42:57.531 } 00:42:57.531 Got JSON-RPC error response 00:42:57.531 response: 00:42:57.531 { 00:42:57.531 "code": -19, 00:42:57.531 "message": "No such device" 00:42:57.531 } 00:42:57.531 02:00:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:57.531 02:00:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:57.531 02:00:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:57.531 02:00:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:57.531 02:00:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:57.531 02:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:57.789 02:00:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.a3kBDbbOqR 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:42:57.789 02:00:37 keyring_file -- nvmf/common.sh@729 -- # python - 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.a3kBDbbOqR 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.a3kBDbbOqR 00:42:57.789 02:00:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.a3kBDbbOqR 00:42:57.789 02:00:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.a3kBDbbOqR 00:42:57.789 02:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.a3kBDbbOqR 00:42:58.047 02:00:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.047 02:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.305 nvme0n1 00:42:58.305 02:00:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:58.305 02:00:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:58.305 02:00:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:58.305 02:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.305 02:00:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.305 02:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.563 02:00:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:58.563 02:00:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:58.563 02:00:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:58.822 02:00:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:58.822 02:00:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:58.822 02:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.822 02:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.822 02:00:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.386 02:00:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:59.386 02:00:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:59.386 02:00:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:59.386 02:00:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:59.386 02:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.386 02:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:59.386 02:00:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.386 02:00:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:59.386 02:00:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:59.386 02:00:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:59.643 02:00:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:59.643 02:00:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:59.643 02:00:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.209 02:00:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:00.209 02:00:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.a3kBDbbOqR 00:43:00.209 02:00:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.a3kBDbbOqR 00:43:00.209 02:00:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1PuIIOBvWZ 00:43:00.209 02:00:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1PuIIOBvWZ 00:43:00.467 02:00:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:00.467 02:00:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:01.032 nvme0n1 00:43:01.032 02:00:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:01.032 02:00:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:01.290 02:00:40 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:01.290 "subsystems": [ 00:43:01.290 { 00:43:01.290 "subsystem": "keyring", 00:43:01.290 "config": [ 00:43:01.290 { 00:43:01.290 "method": "keyring_file_add_key", 00:43:01.290 "params": { 00:43:01.290 "name": "key0", 00:43:01.290 "path": "/tmp/tmp.a3kBDbbOqR" 00:43:01.290 } 00:43:01.290 }, 00:43:01.290 { 00:43:01.290 "method": "keyring_file_add_key", 00:43:01.290 "params": { 00:43:01.290 "name": "key1", 00:43:01.290 "path": "/tmp/tmp.1PuIIOBvWZ" 00:43:01.290 } 00:43:01.290 } 00:43:01.290 ] 00:43:01.290 
}, 00:43:01.290 { 00:43:01.290 "subsystem": "iobuf", 00:43:01.290 "config": [ 00:43:01.290 { 00:43:01.290 "method": "iobuf_set_options", 00:43:01.290 "params": { 00:43:01.290 "small_pool_count": 8192, 00:43:01.290 "large_pool_count": 1024, 00:43:01.290 "small_bufsize": 8192, 00:43:01.290 "large_bufsize": 135168 00:43:01.290 } 00:43:01.290 } 00:43:01.290 ] 00:43:01.290 }, 00:43:01.290 { 00:43:01.290 "subsystem": "sock", 00:43:01.290 "config": [ 00:43:01.290 { 00:43:01.290 "method": "sock_set_default_impl", 00:43:01.290 "params": { 00:43:01.290 "impl_name": "posix" 00:43:01.290 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "sock_impl_set_options", 00:43:01.291 "params": { 00:43:01.291 "impl_name": "ssl", 00:43:01.291 "recv_buf_size": 4096, 00:43:01.291 "send_buf_size": 4096, 00:43:01.291 "enable_recv_pipe": true, 00:43:01.291 "enable_quickack": false, 00:43:01.291 "enable_placement_id": 0, 00:43:01.291 "enable_zerocopy_send_server": true, 00:43:01.291 "enable_zerocopy_send_client": false, 00:43:01.291 "zerocopy_threshold": 0, 00:43:01.291 "tls_version": 0, 00:43:01.291 "enable_ktls": false 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "sock_impl_set_options", 00:43:01.291 "params": { 00:43:01.291 "impl_name": "posix", 00:43:01.291 "recv_buf_size": 2097152, 00:43:01.291 "send_buf_size": 2097152, 00:43:01.291 "enable_recv_pipe": true, 00:43:01.291 "enable_quickack": false, 00:43:01.291 "enable_placement_id": 0, 00:43:01.291 "enable_zerocopy_send_server": true, 00:43:01.291 "enable_zerocopy_send_client": false, 00:43:01.291 "zerocopy_threshold": 0, 00:43:01.291 "tls_version": 0, 00:43:01.291 "enable_ktls": false 00:43:01.291 } 00:43:01.291 } 00:43:01.291 ] 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "subsystem": "vmd", 00:43:01.291 "config": [] 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "subsystem": "accel", 00:43:01.291 "config": [ 00:43:01.291 { 00:43:01.291 "method": "accel_set_options", 00:43:01.291 "params": { 00:43:01.291 "small_cache_size": 128, 00:43:01.291 "large_cache_size": 16, 00:43:01.291 "task_count": 2048, 00:43:01.291 "sequence_count": 2048, 00:43:01.291 "buf_count": 2048 00:43:01.291 } 00:43:01.291 } 00:43:01.291 ] 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "subsystem": "bdev", 00:43:01.291 "config": [ 00:43:01.291 { 00:43:01.291 "method": "bdev_set_options", 00:43:01.291 "params": { 00:43:01.291 "bdev_io_pool_size": 65535, 00:43:01.291 "bdev_io_cache_size": 256, 00:43:01.291 "bdev_auto_examine": true, 00:43:01.291 "iobuf_small_cache_size": 128, 00:43:01.291 "iobuf_large_cache_size": 16 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_raid_set_options", 00:43:01.291 "params": { 00:43:01.291 "process_window_size_kb": 1024, 00:43:01.291 "process_max_bandwidth_mb_sec": 0 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_iscsi_set_options", 00:43:01.291 "params": { 00:43:01.291 "timeout_sec": 30 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_nvme_set_options", 00:43:01.291 "params": { 00:43:01.291 "action_on_timeout": "none", 00:43:01.291 "timeout_us": 0, 00:43:01.291 "timeout_admin_us": 0, 00:43:01.291 "keep_alive_timeout_ms": 10000, 00:43:01.291 "arbitration_burst": 0, 00:43:01.291 "low_priority_weight": 0, 00:43:01.291 "medium_priority_weight": 0, 00:43:01.291 "high_priority_weight": 0, 00:43:01.291 "nvme_adminq_poll_period_us": 10000, 00:43:01.291 "nvme_ioq_poll_period_us": 0, 00:43:01.291 "io_queue_requests": 512, 00:43:01.291 "delay_cmd_submit": true, 00:43:01.291 
"transport_retry_count": 4, 00:43:01.291 "bdev_retry_count": 3, 00:43:01.291 "transport_ack_timeout": 0, 00:43:01.291 "ctrlr_loss_timeout_sec": 0, 00:43:01.291 "reconnect_delay_sec": 0, 00:43:01.291 "fast_io_fail_timeout_sec": 0, 00:43:01.291 "disable_auto_failback": false, 00:43:01.291 "generate_uuids": false, 00:43:01.291 "transport_tos": 0, 00:43:01.291 "nvme_error_stat": false, 00:43:01.291 "rdma_srq_size": 0, 00:43:01.291 "io_path_stat": false, 00:43:01.291 "allow_accel_sequence": false, 00:43:01.291 "rdma_max_cq_size": 0, 00:43:01.291 "rdma_cm_event_timeout_ms": 0, 00:43:01.291 "dhchap_digests": [ 00:43:01.291 "sha256", 00:43:01.291 "sha384", 00:43:01.291 "sha512" 00:43:01.291 ], 00:43:01.291 "dhchap_dhgroups": [ 00:43:01.291 "null", 00:43:01.291 "ffdhe2048", 00:43:01.291 "ffdhe3072", 00:43:01.291 "ffdhe4096", 00:43:01.291 "ffdhe6144", 00:43:01.291 "ffdhe8192" 00:43:01.291 ] 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_nvme_attach_controller", 00:43:01.291 "params": { 00:43:01.291 "name": "nvme0", 00:43:01.291 "trtype": "TCP", 00:43:01.291 "adrfam": "IPv4", 00:43:01.291 "traddr": "127.0.0.1", 00:43:01.291 "trsvcid": "4420", 00:43:01.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:01.291 "prchk_reftag": false, 00:43:01.291 "prchk_guard": false, 00:43:01.291 "ctrlr_loss_timeout_sec": 0, 00:43:01.291 "reconnect_delay_sec": 0, 00:43:01.291 "fast_io_fail_timeout_sec": 0, 00:43:01.291 "psk": "key0", 00:43:01.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:01.291 "hdgst": false, 00:43:01.291 "ddgst": false 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_nvme_set_hotplug", 00:43:01.291 "params": { 00:43:01.291 "period_us": 100000, 00:43:01.291 "enable": false 00:43:01.291 } 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "method": "bdev_wait_for_examine" 00:43:01.291 } 00:43:01.291 ] 00:43:01.291 }, 00:43:01.291 { 00:43:01.291 "subsystem": "nbd", 00:43:01.291 "config": [] 00:43:01.291 } 00:43:01.291 ] 00:43:01.291 }' 00:43:01.291 02:00:40 keyring_file -- keyring/file.sh@115 -- # killprocess 1142644 00:43:01.291 02:00:40 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1142644 ']' 00:43:01.291 02:00:40 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1142644 00:43:01.291 02:00:40 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:01.291 02:00:40 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:01.291 02:00:40 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1142644 00:43:01.291 02:00:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:01.291 02:00:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:01.291 02:00:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1142644' 00:43:01.291 killing process with pid 1142644 00:43:01.291 02:00:41 keyring_file -- common/autotest_common.sh@969 -- # kill 1142644 00:43:01.291 Received shutdown signal, test time was about 1.000000 seconds 00:43:01.291 00:43:01.291 Latency(us) 00:43:01.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:01.291 =================================================================================================================== 00:43:01.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:01.291 02:00:41 keyring_file -- common/autotest_common.sh@974 -- # wait 1142644 00:43:01.550 02:00:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=1144107 00:43:01.550 
02:00:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1144107 /var/tmp/bperf.sock 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1144107 ']' 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:01.550 02:00:41 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:01.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:01.550 02:00:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:01.550 02:00:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:01.550 "subsystems": [ 00:43:01.550 { 00:43:01.550 "subsystem": "keyring", 00:43:01.550 "config": [ 00:43:01.550 { 00:43:01.550 "method": "keyring_file_add_key", 00:43:01.550 "params": { 00:43:01.550 "name": "key0", 00:43:01.550 "path": "/tmp/tmp.a3kBDbbOqR" 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "keyring_file_add_key", 00:43:01.550 "params": { 00:43:01.550 "name": "key1", 00:43:01.550 "path": "/tmp/tmp.1PuIIOBvWZ" 00:43:01.550 } 00:43:01.550 } 00:43:01.550 ] 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "subsystem": "iobuf", 00:43:01.550 "config": [ 00:43:01.550 { 00:43:01.550 "method": "iobuf_set_options", 00:43:01.550 "params": { 00:43:01.550 "small_pool_count": 8192, 00:43:01.550 "large_pool_count": 1024, 00:43:01.550 "small_bufsize": 8192, 00:43:01.550 "large_bufsize": 135168 00:43:01.550 } 00:43:01.550 } 00:43:01.550 ] 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "subsystem": "sock", 00:43:01.550 "config": [ 00:43:01.550 { 00:43:01.550 "method": "sock_set_default_impl", 00:43:01.550 "params": { 00:43:01.550 "impl_name": "posix" 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "sock_impl_set_options", 00:43:01.550 "params": { 00:43:01.550 "impl_name": "ssl", 00:43:01.550 "recv_buf_size": 4096, 00:43:01.550 "send_buf_size": 4096, 00:43:01.550 "enable_recv_pipe": true, 00:43:01.550 "enable_quickack": false, 00:43:01.550 "enable_placement_id": 0, 00:43:01.550 "enable_zerocopy_send_server": true, 00:43:01.550 "enable_zerocopy_send_client": false, 00:43:01.550 "zerocopy_threshold": 0, 00:43:01.550 "tls_version": 0, 00:43:01.550 "enable_ktls": false 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "sock_impl_set_options", 00:43:01.550 "params": { 00:43:01.550 "impl_name": "posix", 00:43:01.550 "recv_buf_size": 2097152, 00:43:01.550 "send_buf_size": 2097152, 00:43:01.550 "enable_recv_pipe": true, 00:43:01.550 "enable_quickack": false, 00:43:01.550 "enable_placement_id": 0, 00:43:01.550 "enable_zerocopy_send_server": true, 00:43:01.550 "enable_zerocopy_send_client": false, 00:43:01.550 "zerocopy_threshold": 0, 00:43:01.550 "tls_version": 0, 00:43:01.550 "enable_ktls": false 00:43:01.550 } 00:43:01.550 } 00:43:01.550 ] 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "subsystem": "vmd", 00:43:01.550 "config": [] 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "subsystem": "accel", 00:43:01.550 "config": [ 00:43:01.550 { 00:43:01.550 "method": 
"accel_set_options", 00:43:01.550 "params": { 00:43:01.550 "small_cache_size": 128, 00:43:01.550 "large_cache_size": 16, 00:43:01.550 "task_count": 2048, 00:43:01.550 "sequence_count": 2048, 00:43:01.550 "buf_count": 2048 00:43:01.550 } 00:43:01.550 } 00:43:01.550 ] 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "subsystem": "bdev", 00:43:01.550 "config": [ 00:43:01.550 { 00:43:01.550 "method": "bdev_set_options", 00:43:01.550 "params": { 00:43:01.550 "bdev_io_pool_size": 65535, 00:43:01.550 "bdev_io_cache_size": 256, 00:43:01.550 "bdev_auto_examine": true, 00:43:01.550 "iobuf_small_cache_size": 128, 00:43:01.550 "iobuf_large_cache_size": 16 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "bdev_raid_set_options", 00:43:01.550 "params": { 00:43:01.550 "process_window_size_kb": 1024, 00:43:01.550 "process_max_bandwidth_mb_sec": 0 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "bdev_iscsi_set_options", 00:43:01.550 "params": { 00:43:01.550 "timeout_sec": 30 00:43:01.550 } 00:43:01.550 }, 00:43:01.550 { 00:43:01.550 "method": "bdev_nvme_set_options", 00:43:01.550 "params": { 00:43:01.550 "action_on_timeout": "none", 00:43:01.550 "timeout_us": 0, 00:43:01.550 "timeout_admin_us": 0, 00:43:01.550 "keep_alive_timeout_ms": 10000, 00:43:01.550 "arbitration_burst": 0, 00:43:01.550 "low_priority_weight": 0, 00:43:01.550 "medium_priority_weight": 0, 00:43:01.550 "high_priority_weight": 0, 00:43:01.550 "nvme_adminq_poll_period_us": 10000, 00:43:01.550 "nvme_ioq_poll_period_us": 0, 00:43:01.550 "io_queue_requests": 512, 00:43:01.550 "delay_cmd_submit": true, 00:43:01.550 "transport_retry_count": 4, 00:43:01.550 "bdev_retry_count": 3, 00:43:01.551 "transport_ack_timeout": 0, 00:43:01.551 "ctrlr_loss_timeout_sec": 0, 00:43:01.551 "reconnect_delay_sec": 0, 00:43:01.551 "fast_io_fail_timeout_sec": 0, 00:43:01.551 "disable_auto_failback": false, 00:43:01.551 "generate_uuids": false, 00:43:01.551 "transport_tos": 0, 00:43:01.551 "nvme_error_stat": false, 00:43:01.551 "rdma_srq_size": 0, 00:43:01.551 "io_path_stat": false, 00:43:01.551 "allow_accel_sequence": false, 00:43:01.551 "rdma_max_cq_size": 0, 00:43:01.551 "rdma_cm_event_timeout_ms": 0, 00:43:01.551 "dhchap_digests": [ 00:43:01.551 "sha256", 00:43:01.551 "sha384", 00:43:01.551 "sha512" 00:43:01.551 ], 00:43:01.551 "dhchap_dhgroups": [ 00:43:01.551 "null", 00:43:01.551 "ffdhe2048", 00:43:01.551 "ffdhe3072", 00:43:01.551 "ffdhe4096", 00:43:01.551 "ffdhe6144", 00:43:01.551 "ffdhe8192" 00:43:01.551 ] 00:43:01.551 } 00:43:01.551 }, 00:43:01.551 { 00:43:01.551 "method": "bdev_nvme_attach_controller", 00:43:01.551 "params": { 00:43:01.551 "name": "nvme0", 00:43:01.551 "trtype": "TCP", 00:43:01.551 "adrfam": "IPv4", 00:43:01.551 "traddr": "127.0.0.1", 00:43:01.551 "trsvcid": "4420", 00:43:01.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:01.551 "prchk_reftag": false, 00:43:01.551 "prchk_guard": false, 00:43:01.551 "ctrlr_loss_timeout_sec": 0, 00:43:01.551 "reconnect_delay_sec": 0, 00:43:01.551 "fast_io_fail_timeout_sec": 0, 00:43:01.551 "psk": "key0", 00:43:01.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:01.551 "hdgst": false, 00:43:01.551 "ddgst": false 00:43:01.551 } 00:43:01.551 }, 00:43:01.551 { 00:43:01.551 "method": "bdev_nvme_set_hotplug", 00:43:01.551 "params": { 00:43:01.551 "period_us": 100000, 00:43:01.551 "enable": false 00:43:01.551 } 00:43:01.551 }, 00:43:01.551 { 00:43:01.551 "method": "bdev_wait_for_examine" 00:43:01.551 } 00:43:01.551 ] 00:43:01.551 }, 00:43:01.551 { 00:43:01.551 "subsystem": 
"nbd", 00:43:01.551 "config": [] 00:43:01.551 } 00:43:01.551 ] 00:43:01.551 }' 00:43:01.551 [2024-10-01 02:00:41.264452] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:43:01.551 [2024-10-01 02:00:41.264527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144107 ] 00:43:01.551 [2024-10-01 02:00:41.322875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:01.809 [2024-10-01 02:00:41.410657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:01.809 [2024-10-01 02:00:41.600231] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:02.066 02:00:41 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:02.066 02:00:41 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:43:02.066 02:00:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:02.066 02:00:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:02.066 02:00:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.323 02:00:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:02.324 02:00:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:02.324 02:00:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:02.324 02:00:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.324 02:00:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.324 02:00:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:02.324 02:00:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.581 02:00:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:02.581 02:00:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:02.581 02:00:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:02.581 02:00:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:02.581 02:00:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:02.581 02:00:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:02.581 02:00:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.839 02:00:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:02.839 02:00:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:02.839 02:00:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:02.839 02:00:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:03.097 02:00:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:03.097 02:00:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:03.097 02:00:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.a3kBDbbOqR /tmp/tmp.1PuIIOBvWZ 00:43:03.097 02:00:42 keyring_file -- keyring/file.sh@20 -- # killprocess 1144107 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1144107 ']' 
00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1144107 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1144107 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1144107' 00:43:03.097 killing process with pid 1144107 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@969 -- # kill 1144107 00:43:03.097 Received shutdown signal, test time was about 1.000000 seconds 00:43:03.097 00:43:03.097 Latency(us) 00:43:03.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.097 =================================================================================================================== 00:43:03.097 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:03.097 02:00:42 keyring_file -- common/autotest_common.sh@974 -- # wait 1144107 00:43:03.354 02:00:43 keyring_file -- keyring/file.sh@21 -- # killprocess 1142629 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1142629 ']' 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1142629 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@955 -- # uname 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1142629 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1142629' 00:43:03.354 killing process with pid 1142629 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@969 -- # kill 1142629 00:43:03.354 02:00:43 keyring_file -- common/autotest_common.sh@974 -- # wait 1142629 00:43:03.919 00:43:03.919 real 0m14.637s 00:43:03.919 user 0m36.699s 00:43:03.919 sys 0m3.398s 00:43:03.919 02:00:43 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:03.919 02:00:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:03.919 ************************************ 00:43:03.919 END TEST keyring_file 00:43:03.919 ************************************ 00:43:03.919 02:00:43 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:43:03.919 02:00:43 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:03.919 02:00:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:03.919 02:00:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:03.919 02:00:43 -- common/autotest_common.sh@10 -- # set +x 00:43:03.919 ************************************ 00:43:03.919 START TEST keyring_linux 00:43:03.919 ************************************ 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:03.919 Joined session keyring: 296620299 00:43:03.919 * Looking for test storage... 00:43:03.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:03.919 02:00:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:03.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.919 --rc genhtml_branch_coverage=1 00:43:03.919 --rc genhtml_function_coverage=1 00:43:03.919 --rc genhtml_legend=1 00:43:03.919 --rc geninfo_all_blocks=1 00:43:03.919 --rc geninfo_unexecuted_blocks=1 00:43:03.919 00:43:03.919 ' 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:03.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.919 --rc genhtml_branch_coverage=1 00:43:03.919 --rc genhtml_function_coverage=1 00:43:03.919 --rc genhtml_legend=1 00:43:03.919 --rc geninfo_all_blocks=1 00:43:03.919 --rc geninfo_unexecuted_blocks=1 00:43:03.919 00:43:03.919 ' 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:03.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.919 --rc genhtml_branch_coverage=1 00:43:03.919 --rc genhtml_function_coverage=1 00:43:03.919 --rc genhtml_legend=1 00:43:03.919 --rc geninfo_all_blocks=1 00:43:03.919 --rc geninfo_unexecuted_blocks=1 00:43:03.919 00:43:03.919 ' 00:43:03.919 02:00:43 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:03.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.919 --rc genhtml_branch_coverage=1 00:43:03.919 --rc genhtml_function_coverage=1 00:43:03.919 --rc genhtml_legend=1 00:43:03.919 --rc geninfo_all_blocks=1 00:43:03.919 --rc geninfo_unexecuted_blocks=1 00:43:03.919 00:43:03.919 ' 00:43:03.919 02:00:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:03.919 02:00:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:03.919 02:00:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:03.919 02:00:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.920 02:00:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.920 02:00:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.920 02:00:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.920 02:00:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.920 02:00:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.920 02:00:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.920 02:00:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.920 02:00:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:03.920 02:00:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
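The trace above walks scripts/common.sh through a component-wise version comparison (lt 1.15 2) to decide whether the installed lcov still needs the pre-2.0 "--rc lcov_*" option names. A minimal re-sketch of that comparison, assuming purely numeric dot/dash/colon-separated components; cmp_lt is an illustrative name standing in for the traced helper, not the SPDK implementation itself:

cmp_lt() {                                   # illustrative stand-in for the traced lt()/cmp_versions
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"                   # split "1.15" into (1 15), as read -ra ver1 does above
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}     # missing components count as 0
        (( a > b )) && return 1              # first larger component: not less-than
        (( a < b )) && return 0              # first smaller component: less-than
    done
    return 1                                 # all components equal: not less-than
}
cmp_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc lcov_* option names"

With lcov 1.15 installed, the comparison returns true, which is why the trace exports LCOV_OPTS with the legacy lcov_branch_coverage/lcov_function_coverage names.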
00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:03.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:03.920 /tmp/:spdk-test:key0 00:43:03.920 02:00:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:03.920 
02:00:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:43:03.920 02:00:43 keyring_linux -- nvmf/common.sh@729 -- # python - 00:43:03.920 02:00:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:04.179 02:00:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:04.179 /tmp/:spdk-test:key1 00:43:04.179 02:00:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1144566 00:43:04.179 02:00:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:04.179 02:00:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1144566 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1144566 ']' 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:04.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:04.179 02:00:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:04.179 [2024-10-01 02:00:43.828744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:43:04.179 [2024-10-01 02:00:43.828846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144566 ] 00:43:04.179 [2024-10-01 02:00:43.892623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.179 [2024-10-01 02:00:43.982647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.437 02:00:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:04.437 02:00:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:04.437 02:00:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:04.437 02:00:44 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:04.437 02:00:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:04.438 [2024-10-01 02:00:44.257077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:04.438 null0 00:43:04.438 [2024-10-01 02:00:44.289119] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:04.438 [2024-10-01 02:00:44.289658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:04.696 02:00:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:04.696 1072381166 00:43:04.696 02:00:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:04.696 233718241 00:43:04.696 02:00:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1144602 00:43:04.696 02:00:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:04.696 02:00:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1144602 /var/tmp/bperf.sock 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1144602 ']' 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:04.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:04.696 02:00:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:04.696 [2024-10-01 02:00:44.357854] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:43:04.696 [2024-10-01 02:00:44.357932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144602 ] 00:43:04.696 [2024-10-01 02:00:44.420675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.696 [2024-10-01 02:00:44.511417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:04.954 02:00:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:04.954 02:00:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:43:04.954 02:00:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:04.954 02:00:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:05.211 02:00:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:05.211 02:00:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:05.470 02:00:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:05.470 02:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:05.728 [2024-10-01 02:00:45.460167] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:05.728 nvme0n1 00:43:05.728 02:00:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:05.728 02:00:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:05.728 02:00:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:05.728 02:00:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:05.728 02:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:05.728 02:00:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:05.985 02:00:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:05.985 02:00:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:05.985 02:00:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:05.985 02:00:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:05.986 02:00:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:05.986 02:00:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:05.986 02:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@25 -- # sn=1072381166 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:06.551 02:00:46 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 1072381166 == \1\0\7\2\3\8\1\1\6\6 ]] 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1072381166 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:06.551 02:00:46 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:06.551 Running I/O for 1 seconds... 00:43:07.484 6466.00 IOPS, 25.26 MiB/s 00:43:07.484 Latency(us) 00:43:07.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.484 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:07.484 nvme0n1 : 1.02 6496.39 25.38 0.00 0.00 19570.22 7330.32 27962.03 00:43:07.484 =================================================================================================================== 00:43:07.484 Total : 6496.39 25.38 0.00 0.00 19570.22 7330.32 27962.03 00:43:07.484 { 00:43:07.484 "results": [ 00:43:07.484 { 00:43:07.484 "job": "nvme0n1", 00:43:07.484 "core_mask": "0x2", 00:43:07.484 "workload": "randread", 00:43:07.484 "status": "finished", 00:43:07.484 "queue_depth": 128, 00:43:07.484 "io_size": 4096, 00:43:07.484 "runtime": 1.015179, 00:43:07.484 "iops": 6496.391276809311, 00:43:07.484 "mibps": 25.376528425036373, 00:43:07.484 "io_failed": 0, 00:43:07.484 "io_timeout": 0, 00:43:07.484 "avg_latency_us": 19570.220352343247, 00:43:07.484 "min_latency_us": 7330.322962962963, 00:43:07.484 "max_latency_us": 27962.02666666667 00:43:07.484 } 00:43:07.484 ], 00:43:07.484 "core_count": 1 00:43:07.484 } 00:43:07.484 02:00:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:07.484 02:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:07.742 02:00:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:07.742 02:00:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:07.742 02:00:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:07.742 02:00:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:07.742 02:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.742 02:00:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:08.000 02:00:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:08.000 02:00:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:08.000 02:00:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:08.000 02:00:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:08.000 02:00:47 keyring_linux -- 
common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:08.000 02:00:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:08.000 02:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:08.258 [2024-10-01 02:00:48.046577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:08.258 [2024-10-01 02:00:48.047079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x625ea0 (107): Transport endpoint is not connected 00:43:08.258 [2024-10-01 02:00:48.048064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x625ea0 (9): Bad file descriptor 00:43:08.258 [2024-10-01 02:00:48.049063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:08.258 [2024-10-01 02:00:48.049088] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:08.258 [2024-10-01 02:00:48.049109] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:08.258 [2024-10-01 02:00:48.049134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
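The errors above belong to the test's deliberate negative case: linux.sh@84 runs the attach under NOT and expects the call with --psk :spdk-test:key1 to fail. The mechanics both the positive and negative paths rely on are a plain keyctl round-trip, which the trace exercises at linux.sh@66, @16 and @27. A hedged sketch of that round-trip, with the payload copied verbatim from the traced keyctl add; the RPC is left as a comment because it presupposes this run's bperf.sock and running target:

name=":spdk-test:key0"
payload="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"   # from prep_key above

sn=$(keyctl add user "$name" "$payload" @s)   # register the PSK in the session keyring; prints the serial
keyctl print "$sn"                            # dump the stored interchange-format PSK, as linux.sh@27 does
keyctl search @s user "$name"                 # name -> serial lookup, as get_keysn does

# Traced RPC (keyring/common.sh@8): with keyring_linux_set_options --enable,
# --psk is given the key name registered above rather than a key file.
# scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
#     -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
#     -q nqn.2016-06.io.spdk:host0 --psk "$name"

keyctl unlink "$sn"                           # cleanup, mirroring unlink_key ("1 links removed")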
00:43:08.258 request: 00:43:08.258 { 00:43:08.258 "name": "nvme0", 00:43:08.258 "trtype": "tcp", 00:43:08.258 "traddr": "127.0.0.1", 00:43:08.258 "adrfam": "ipv4", 00:43:08.258 "trsvcid": "4420", 00:43:08.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:08.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:08.258 "prchk_reftag": false, 00:43:08.258 "prchk_guard": false, 00:43:08.258 "hdgst": false, 00:43:08.258 "ddgst": false, 00:43:08.258 "psk": ":spdk-test:key1", 00:43:08.258 "allow_unrecognized_csi": false, 00:43:08.258 "method": "bdev_nvme_attach_controller", 00:43:08.258 "req_id": 1 00:43:08.258 } 00:43:08.258 Got JSON-RPC error response 00:43:08.258 response: 00:43:08.258 { 00:43:08.258 "code": -5, 00:43:08.258 "message": "Input/output error" 00:43:08.258 } 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@33 -- # sn=1072381166 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1072381166 00:43:08.258 1 links removed 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@33 -- # sn=233718241 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 233718241 00:43:08.258 1 links removed 00:43:08.258 02:00:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1144602 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1144602 ']' 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1144602 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1144602 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:08.258 02:00:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:08.259 02:00:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1144602' 00:43:08.259 killing process with pid 1144602 00:43:08.259 02:00:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 1144602 00:43:08.259 Received shutdown signal, test time was about 1.000000 seconds 00:43:08.259 00:43:08.259 
Latency(us) 00:43:08.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.259 =================================================================================================================== 00:43:08.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:08.259 02:00:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 1144602 00:43:08.517 02:00:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1144566 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1144566 ']' 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1144566 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1144566 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1144566' 00:43:08.517 killing process with pid 1144566 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 1144566 00:43:08.517 02:00:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 1144566 00:43:09.084 00:43:09.084 real 0m5.238s 00:43:09.084 user 0m9.938s 00:43:09.084 sys 0m1.693s 00:43:09.084 02:00:48 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:09.084 02:00:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:09.084 ************************************ 00:43:09.084 END TEST keyring_linux 00:43:09.084 ************************************ 00:43:09.084 02:00:48 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:09.084 02:00:48 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:09.084 02:00:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:09.084 02:00:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:09.084 02:00:48 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:09.084 02:00:48 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:09.084 02:00:48 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:09.084 02:00:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:09.084 02:00:48 -- common/autotest_common.sh@10 -- # set +x 00:43:09.084 02:00:48 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:09.084 02:00:48 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:09.084 02:00:48 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:09.084 02:00:48 -- common/autotest_common.sh@10 -- # set +x 00:43:10.987 INFO: APP EXITING 00:43:10.987 INFO: killing all VMs 00:43:10.987 INFO: killing vhost app 00:43:10.987 INFO: 
EXIT DONE 00:43:12.362 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:12.362 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:12.362 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:12.362 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:12.362 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:12.362 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:12.362 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:12.362 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:12.362 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:12.362 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:12.362 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:12.362 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:12.362 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:12.362 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:12.362 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:12.362 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:12.363 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:13.739 Cleaning 00:43:13.739 Removing: /var/run/dpdk/spdk0/config 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:13.739 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:13.739 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:13.739 Removing: /var/run/dpdk/spdk1/config 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:13.739 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:13.739 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:13.739 Removing: /var/run/dpdk/spdk2/config 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:13.739 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:13.739 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:13.739 Removing: /var/run/dpdk/spdk3/config 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:13.739 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:13.739 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:13.739 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:13.739 Removing: /var/run/dpdk/spdk4/config 00:43:13.739 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:13.739 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:13.740 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:13.740 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:13.740 Removing: /dev/shm/bdev_svc_trace.1 00:43:13.740 Removing: /dev/shm/nvmf_trace.0 00:43:13.740 Removing: /dev/shm/spdk_tgt_trace.pid761942 00:43:13.740 Removing: /var/run/dpdk/spdk0 00:43:13.740 Removing: /var/run/dpdk/spdk1 00:43:13.740 Removing: /var/run/dpdk/spdk2 00:43:13.740 Removing: /var/run/dpdk/spdk3 00:43:13.740 Removing: /var/run/dpdk/spdk4 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1002814 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1002949 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1003091 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1003353 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1003446 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1004554 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1006348 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1007527 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1008701 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1009883 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1011121 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1014873 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1015327 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1016606 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1017462 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1021180 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1023048 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1026472 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1030042 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1037146 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1041532 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1041620 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1054260 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1054703 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1055193 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1055603 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1056180 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1056589 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1057004 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1057517 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1060026 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1060176 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1063966 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1064119 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1067377 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1070257 
00:43:13.740 Removing: /var/run/dpdk/spdk_pid1077506 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1077910 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1080302 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1080577 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1083091 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1086882 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1088928 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1095296 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1100498 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1101790 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1102451 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1113130 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1115380 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1117385 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1122431 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1122540 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1125451 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1126798 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1128249 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1128997 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1130393 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1131268 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1136602 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1137055 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1137454 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1139508 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1139786 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1140178 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1142629 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1142644 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1144107 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1144566 00:43:13.740 Removing: /var/run/dpdk/spdk_pid1144602 00:43:13.740 Removing: /var/run/dpdk/spdk_pid760253 00:43:13.740 Removing: /var/run/dpdk/spdk_pid760992 00:43:13.740 Removing: /var/run/dpdk/spdk_pid761942 00:43:13.740 Removing: /var/run/dpdk/spdk_pid762386 00:43:13.740 Removing: /var/run/dpdk/spdk_pid763020 00:43:13.740 Removing: /var/run/dpdk/spdk_pid763157 00:43:13.740 Removing: /var/run/dpdk/spdk_pid763895 00:43:13.740 Removing: /var/run/dpdk/spdk_pid763947 00:43:13.740 Removing: /var/run/dpdk/spdk_pid764244 00:43:13.740 Removing: /var/run/dpdk/spdk_pid766143 00:43:13.740 Removing: /var/run/dpdk/spdk_pid767199 00:43:13.740 Removing: /var/run/dpdk/spdk_pid767394 00:43:13.740 Removing: /var/run/dpdk/spdk_pid767600 00:43:13.740 Removing: /var/run/dpdk/spdk_pid767923 00:43:13.740 Removing: /var/run/dpdk/spdk_pid768127 00:43:13.740 Removing: /var/run/dpdk/spdk_pid768282 00:43:13.740 Removing: /var/run/dpdk/spdk_pid768440 00:43:13.740 Removing: /var/run/dpdk/spdk_pid768636 00:43:13.740 Removing: /var/run/dpdk/spdk_pid769080 00:43:13.740 Removing: /var/run/dpdk/spdk_pid771568 00:43:13.740 Removing: /var/run/dpdk/spdk_pid771738 00:43:13.740 Removing: /var/run/dpdk/spdk_pid771898 00:43:13.740 Removing: /var/run/dpdk/spdk_pid772020 00:43:13.740 Removing: /var/run/dpdk/spdk_pid772332 00:43:13.740 Removing: /var/run/dpdk/spdk_pid772462 00:43:13.740 Removing: /var/run/dpdk/spdk_pid772783 00:43:13.740 Removing: /var/run/dpdk/spdk_pid772893 00:43:13.740 Removing: /var/run/dpdk/spdk_pid773087 00:43:13.740 Removing: /var/run/dpdk/spdk_pid773099 00:43:13.740 Removing: /var/run/dpdk/spdk_pid773376 00:43:13.740 Removing: /var/run/dpdk/spdk_pid773392 00:43:13.740 Removing: /var/run/dpdk/spdk_pid773855 00:43:13.740 Removing: /var/run/dpdk/spdk_pid774042 00:43:13.740 Removing: /var/run/dpdk/spdk_pid774247 00:43:13.740 Removing: 
/var/run/dpdk/spdk_pid776369 00:43:13.740 Removing: /var/run/dpdk/spdk_pid778995 00:43:13.740 Removing: /var/run/dpdk/spdk_pid786114 00:43:13.740 Removing: /var/run/dpdk/spdk_pid786522 00:43:13.740 Removing: /var/run/dpdk/spdk_pid789049 00:43:13.740 Removing: /var/run/dpdk/spdk_pid789214 00:43:13.740 Removing: /var/run/dpdk/spdk_pid791850 00:43:13.740 Removing: /var/run/dpdk/spdk_pid795573 00:43:13.999 Removing: /var/run/dpdk/spdk_pid798386 00:43:13.999 Removing: /var/run/dpdk/spdk_pid804807 00:43:13.999 Removing: /var/run/dpdk/spdk_pid810051 00:43:13.999 Removing: /var/run/dpdk/spdk_pid811364 00:43:13.999 Removing: /var/run/dpdk/spdk_pid812038 00:43:13.999 Removing: /var/run/dpdk/spdk_pid822298 00:43:13.999 Removing: /var/run/dpdk/spdk_pid824587 00:43:13.999 Removing: /var/run/dpdk/spdk_pid879708 00:43:13.999 Removing: /var/run/dpdk/spdk_pid882991 00:43:13.999 Removing: /var/run/dpdk/spdk_pid886827 00:43:13.999 Removing: /var/run/dpdk/spdk_pid890795 00:43:13.999 Removing: /var/run/dpdk/spdk_pid890797 00:43:13.999 Removing: /var/run/dpdk/spdk_pid891453 00:43:13.999 Removing: /var/run/dpdk/spdk_pid892159 00:43:13.999 Removing: /var/run/dpdk/spdk_pid892767 00:43:13.999 Removing: /var/run/dpdk/spdk_pid893668 00:43:13.999 Removing: /var/run/dpdk/spdk_pid893786 00:43:13.999 Removing: /var/run/dpdk/spdk_pid893930 00:43:13.999 Removing: /var/run/dpdk/spdk_pid894060 00:43:13.999 Removing: /var/run/dpdk/spdk_pid894070 00:43:13.999 Removing: /var/run/dpdk/spdk_pid894717 00:43:13.999 Removing: /var/run/dpdk/spdk_pid895254 00:43:13.999 Removing: /var/run/dpdk/spdk_pid895915 00:43:13.999 Removing: /var/run/dpdk/spdk_pid896309 00:43:13.999 Removing: /var/run/dpdk/spdk_pid896316 00:43:13.999 Removing: /var/run/dpdk/spdk_pid896573 00:43:13.999 Removing: /var/run/dpdk/spdk_pid897470 00:43:13.999 Removing: /var/run/dpdk/spdk_pid898203 00:43:13.999 Removing: /var/run/dpdk/spdk_pid903529 00:43:13.999 Removing: /var/run/dpdk/spdk_pid932176 00:43:13.999 Removing: /var/run/dpdk/spdk_pid935090 00:43:13.999 Removing: /var/run/dpdk/spdk_pid936164 00:43:13.999 Removing: /var/run/dpdk/spdk_pid937475 00:43:13.999 Removing: /var/run/dpdk/spdk_pid937606 00:43:13.999 Removing: /var/run/dpdk/spdk_pid937752 00:43:13.999 Removing: /var/run/dpdk/spdk_pid937893 00:43:13.999 Removing: /var/run/dpdk/spdk_pid938445 00:43:13.999 Removing: /var/run/dpdk/spdk_pid939735 00:43:13.999 Removing: /var/run/dpdk/spdk_pid940511 00:43:13.999 Removing: /var/run/dpdk/spdk_pid940963 00:43:13.999 Removing: /var/run/dpdk/spdk_pid943214 00:43:13.999 Removing: /var/run/dpdk/spdk_pid943601 00:43:13.999 Removing: /var/run/dpdk/spdk_pid944160 00:43:13.999 Removing: /var/run/dpdk/spdk_pid946551 00:43:13.999 Removing: /var/run/dpdk/spdk_pid949959 00:43:13.999 Removing: /var/run/dpdk/spdk_pid949960 00:43:13.999 Removing: /var/run/dpdk/spdk_pid949961 00:43:13.999 Removing: /var/run/dpdk/spdk_pid952182 00:43:13.999 Removing: /var/run/dpdk/spdk_pid954381 00:43:13.999 Removing: /var/run/dpdk/spdk_pid957906 00:43:13.999 Removing: /var/run/dpdk/spdk_pid980840 00:43:13.999 Removing: /var/run/dpdk/spdk_pid983606 00:43:13.999 Removing: /var/run/dpdk/spdk_pid987382 00:43:13.999 Removing: /var/run/dpdk/spdk_pid988318 00:43:13.999 Removing: /var/run/dpdk/spdk_pid989405 00:43:13.999 Removing: /var/run/dpdk/spdk_pid990487 00:43:13.999 Removing: /var/run/dpdk/spdk_pid993320 00:43:13.999 Removing: /var/run/dpdk/spdk_pid995681 00:43:13.999 Removing: /var/run/dpdk/spdk_pid999917 00:43:13.999 Removing: /var/run/dpdk/spdk_pid999920 00:43:13.999 Clean 00:43:13.999 02:00:53 
-- common/autotest_common.sh@1451 -- # return 0 00:43:13.999 02:00:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:13.999 02:00:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:13.999 02:00:53 -- common/autotest_common.sh@10 -- # set +x 00:43:13.999 02:00:53 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:13.999 02:00:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:13.999 02:00:53 -- common/autotest_common.sh@10 -- # set +x 00:43:14.258 02:00:53 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:14.258 02:00:53 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:14.258 02:00:53 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:14.258 02:00:53 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:14.258 02:00:53 -- spdk/autotest.sh@394 -- # hostname 00:43:14.258 02:00:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:14.258 geninfo: WARNING: invalid characters removed from testname! 00:43:46.327 02:01:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:49.652 02:01:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:52.962 02:01:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:55.493 02:01:35 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:58.778 02:01:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:02.062 02:01:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:04.594 02:01:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:04.594 02:01:44 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:44:04.594 02:01:44 -- common/autotest_common.sh@1681 -- $ lcov --version 00:44:04.594 02:01:44 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:44:04.594 02:01:44 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:44:04.594 02:01:44 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:44:04.594 02:01:44 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:44:04.594 02:01:44 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:44:04.594 02:01:44 -- scripts/common.sh@336 -- $ IFS=.-: 00:44:04.594 02:01:44 -- scripts/common.sh@336 -- $ read -ra ver1 00:44:04.594 02:01:44 -- scripts/common.sh@337 -- $ IFS=.-: 00:44:04.594 02:01:44 -- scripts/common.sh@337 -- $ read -ra ver2 00:44:04.594 02:01:44 -- scripts/common.sh@338 -- $ local 'op=<' 00:44:04.594 02:01:44 -- scripts/common.sh@340 -- $ ver1_l=2 00:44:04.594 02:01:44 -- scripts/common.sh@341 -- $ ver2_l=1 00:44:04.594 02:01:44 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:44:04.594 02:01:44 -- scripts/common.sh@344 -- $ case "$op" in 00:44:04.594 02:01:44 -- scripts/common.sh@345 -- $ : 1 00:44:04.594 02:01:44 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:44:04.594 02:01:44 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:04.594 02:01:44 -- scripts/common.sh@365 -- $ decimal 1 00:44:04.594 02:01:44 -- scripts/common.sh@353 -- $ local d=1 00:44:04.594 02:01:44 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:44:04.594 02:01:44 -- scripts/common.sh@355 -- $ echo 1 00:44:04.594 02:01:44 -- scripts/common.sh@365 -- $ ver1[v]=1 00:44:04.594 02:01:44 -- scripts/common.sh@366 -- $ decimal 2 00:44:04.594 02:01:44 -- scripts/common.sh@353 -- $ local d=2 00:44:04.594 02:01:44 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:44:04.594 02:01:44 -- scripts/common.sh@355 -- $ echo 2 00:44:04.594 02:01:44 -- scripts/common.sh@366 -- $ ver2[v]=2 00:44:04.594 02:01:44 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:44:04.594 02:01:44 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:44:04.594 02:01:44 -- scripts/common.sh@368 -- $ return 0 00:44:04.594 02:01:44 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:04.594 02:01:44 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:44:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:04.594 --rc genhtml_branch_coverage=1 00:44:04.594 --rc genhtml_function_coverage=1 00:44:04.594 --rc genhtml_legend=1 00:44:04.594 --rc geninfo_all_blocks=1 00:44:04.594 --rc geninfo_unexecuted_blocks=1 00:44:04.594 00:44:04.594 ' 00:44:04.594 02:01:44 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:44:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:04.594 --rc genhtml_branch_coverage=1 00:44:04.594 --rc genhtml_function_coverage=1 00:44:04.594 --rc genhtml_legend=1 00:44:04.594 --rc geninfo_all_blocks=1 00:44:04.594 --rc geninfo_unexecuted_blocks=1 00:44:04.594 00:44:04.594 ' 00:44:04.594 02:01:44 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:44:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:04.594 --rc genhtml_branch_coverage=1 00:44:04.594 --rc genhtml_function_coverage=1 00:44:04.594 --rc genhtml_legend=1 00:44:04.594 --rc geninfo_all_blocks=1 00:44:04.594 --rc geninfo_unexecuted_blocks=1 00:44:04.594 00:44:04.594 ' 00:44:04.594 02:01:44 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:44:04.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:04.594 --rc genhtml_branch_coverage=1 00:44:04.594 --rc genhtml_function_coverage=1 00:44:04.594 --rc genhtml_legend=1 00:44:04.594 --rc geninfo_all_blocks=1 00:44:04.594 --rc geninfo_unexecuted_blocks=1 00:44:04.594 00:44:04.594 ' 00:44:04.594 02:01:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:04.594 02:01:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:44:04.594 02:01:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:04.594 02:01:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:04.594 02:01:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:04.594 02:01:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:04.594 02:01:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:04.595 02:01:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:04.595 02:01:44 -- paths/export.sh@5 -- $ export PATH 00:44:04.595 02:01:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:04.595 02:01:44 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:04.595 02:01:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:44:04.595 02:01:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727740904.XXXXXX 00:44:04.595 02:01:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727740904.43LBb0 00:44:04.595 02:01:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:44:04.595 02:01:44 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:44:04.595 02:01:44 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:04.595 02:01:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:04.595 02:01:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:04.595 02:01:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:04.595 02:01:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:44:04.595 02:01:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:44:04.595 02:01:44 -- common/autotest_common.sh@10 -- $ set +x 00:44:04.595 02:01:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:04.595 02:01:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:44:04.595 02:01:44 -- pm/common@17 -- $ local monitor 00:44:04.595 02:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:04.595 02:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:04.595 02:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:04.595 
02:01:44 -- pm/common@21 -- $ date +%s 00:44:04.595 02:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:04.595 02:01:44 -- pm/common@21 -- $ date +%s 00:44:04.595 02:01:44 -- pm/common@25 -- $ sleep 1 00:44:04.595 02:01:44 -- pm/common@21 -- $ date +%s 00:44:04.595 02:01:44 -- pm/common@21 -- $ date +%s 00:44:04.595 02:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727740904 00:44:04.595 02:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727740904 00:44:04.595 02:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727740904 00:44:04.595 02:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727740904 00:44:04.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727740904_collect-cpu-load.pm.log 00:44:04.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727740904_collect-vmstat.pm.log 00:44:04.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727740904_collect-cpu-temp.pm.log 00:44:04.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727740904_collect-bmc-pm.bmc.pm.log 00:44:05.533 02:01:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:44:05.533 02:01:45 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:44:05.533 02:01:45 -- spdk/autopackage.sh@14 -- $ timing_finish 00:44:05.533 02:01:45 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:05.533 02:01:45 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:05.533 02:01:45 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:05.792 02:01:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:05.792 02:01:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:05.792 02:01:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:05.792 02:01:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:05.792 02:01:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:05.792 02:01:45 -- pm/common@44 -- $ pid=1156886 00:44:05.792 02:01:45 -- pm/common@50 -- $ kill -TERM 1156886 00:44:05.792 02:01:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:05.792 02:01:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:05.792 02:01:45 -- pm/common@44 -- $ pid=1156888 00:44:05.792 02:01:45 -- pm/common@50 -- $ kill -TERM 1156888 00:44:05.792 02:01:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:05.792 
02:01:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:05.792 02:01:45 -- pm/common@44 -- $ pid=1156890 00:44:05.792 02:01:45 -- pm/common@50 -- $ kill -TERM 1156890 00:44:05.792 02:01:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:05.792 02:01:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:05.792 02:01:45 -- pm/common@44 -- $ pid=1156920 00:44:05.792 02:01:45 -- pm/common@50 -- $ sudo -E kill -TERM 1156920 00:44:05.792 + [[ -n 667222 ]] 00:44:05.792 + sudo kill 667222 00:44:05.804 [Pipeline] } 00:44:05.819 [Pipeline] // stage 00:44:05.825 [Pipeline] } 00:44:05.839 [Pipeline] // timeout 00:44:05.845 [Pipeline] } 00:44:05.859 [Pipeline] // catchError 00:44:05.864 [Pipeline] } 00:44:05.881 [Pipeline] // wrap 00:44:05.887 [Pipeline] } 00:44:05.900 [Pipeline] // catchError 00:44:05.909 [Pipeline] stage 00:44:05.911 [Pipeline] { (Epilogue) 00:44:05.924 [Pipeline] catchError 00:44:05.926 [Pipeline] { 00:44:05.938 [Pipeline] echo 00:44:05.940 Cleanup processes 00:44:05.946 [Pipeline] sh 00:44:06.232 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.232 1157091 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:06.232 1157198 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.246 [Pipeline] sh 00:44:06.531 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.531 ++ grep -v 'sudo pgrep' 00:44:06.531 ++ awk '{print $1}' 00:44:06.531 + sudo kill -9 1157091 00:44:06.542 [Pipeline] sh 00:44:06.825 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:19.131 [Pipeline] sh 00:44:19.417 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:19.417 Artifacts sizes are good 00:44:19.430 [Pipeline] archiveArtifacts 00:44:19.436 Archiving artifacts 00:44:19.644 [Pipeline] sh 00:44:19.928 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:19.943 [Pipeline] cleanWs 00:44:19.954 [WS-CLEANUP] Deleting project workspace... 00:44:19.954 [WS-CLEANUP] Deferred wipeout is used... 00:44:19.962 [WS-CLEANUP] done 00:44:19.964 [Pipeline] } 00:44:19.980 [Pipeline] // catchError 00:44:19.992 [Pipeline] sh 00:44:20.273 + logger -p user.info -t JENKINS-CI 00:44:20.283 [Pipeline] } 00:44:20.297 [Pipeline] // stage 00:44:20.302 [Pipeline] } 00:44:20.316 [Pipeline] // node 00:44:20.321 [Pipeline] End of Pipeline 00:44:20.366 Finished: SUCCESS
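Once the test suite and target processes are down, the remaining work recorded above is coverage and timing bookkeeping: autotest.sh captures a per-host lcov trace, merges it with the base capture, and strips DPDK, system and example/tool sources before the Build Timing flamegraph and artifact steps run. A condensed sketch of that lcov pipeline, with the long Jenkins paths replaced by a local ./output directory and the genhtml-related --rc flags omitted for brevity; flags otherwise mirror the traced invocations:

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"   # legacy names, per the version check
out=./output

lcov $LCOV_OPTS -q -c --no-external -d ./spdk -t spdk-gp-11 -o $out/cov_test.info       # capture (autotest.sh@394)
lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info   # merge with base (@395)
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info             # drop DPDK sources (@396)
lcov $LCOV_OPTS -q -r $out/cov_total.info --ignore-errors unused,unused '/usr/*' -o $out/cov_total.info   # (@400)
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/examples/vmd/*' -o $out/cov_total.info     # (@401)
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/app/spdk_lspci/*' -o $out/cov_total.info   # (@402)
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/app/spdk_top/*' -o $out/cov_total.info     # (@403)
rm -f cov_base.info cov_test.info                                                       # scratch files (@404)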